The past, present, and future industry prospects of virtual reality (VR) and augmented reality (AR) are presented. The future of VR/AR technology based on holographic display is predicted by analogy with VR/AR based on binocular vision display and light field display. Investigations on holographic displays that can be used in VR/AR are reviewed. Breakthroughs in holographic display promise VR/AR with high resolution. The challenges faced by VR/AR based on holographic display are analyzed.
© 2018 Optical Society of America
1. INTRODUCTION OF VIRTUAL REALITY AND AUGMENTED REALITY
Virtual reality (VR) has recently been seen as futuristic, but it is not a new concept. In the 1990s, VR technology flourished with the explosion of early 3D display technology. However, due to limited display quality, high price, large time delay, and other shortcomings, VR technology did not make much of an impact. After entering the new millennium, with improvements in hardware, VR devices with higher display quality and lower time delay became possible.
With the vigorous development of VR, augmented reality (AR) is also gradually emerging. Nowadays, VR and AR are often mentioned together, but they are quite different concepts. VR technology places users in a completely virtual world [1–5], including games, movies, and live sports. AR technology allows users to see real scenes in front of them and, at the same time, virtual objects that do not exist in the real world [6–12]. VR devices most often use opaque display elements to present a virtual world, while AR devices use optical combiners and display components to mix real-world and virtual objects. Due to their revolutionary user experiences, VR and AR are considered the most likely “next-generation computing platform” after PCs and smartphones. Thus, they are receiving more and more attention.
In addition to the concepts of VR and AR, various related concepts have emerged in recent years, such as mixed reality (MR) [14,15] and extended reality (XR) [16,17]. A comparison of these concepts is shown in Table 1. Essentially, MR is another version of AR; it allows users to interact with virtual information displayed in the real world, which can make the user experience more realistic. XR is a brand new concept that covers VR, AR, and MR technologies. With XR devices, users can switch freely among different modes, including VR, AR, and MR, as shown in Fig. 1. In the future, devices will be able to switch intelligently from one mode to another.
Although today’s VR/AR industry has made great progress, it is still at a preliminary stage of development. Many shortcomings have not been completely overcome, including high cost, lack of content, and limited display quality. Fortunately, the cost reduction, size reduction, and performance improvements of display devices and sensors driven by the smartphone industry have established a solid foundation for further improvements in VR/AR. The VR/AR industry will certainly usher in faster development. According to Goldman Sachs [13], the value created by the VR/AR industry will mainly concentrate on four aspects: high-quality display technologies, high-performance processors, movement-tracking systems, and haptic feedback devices.
In this work, holographic display, as one of the high-quality display technologies, is our main concern. Holographic display technology realizes 3D reconstruction by using interference fringes. VR/AR devices based on holographic display have not yet been commercialized. However, such devices can provide all types of 3D visual perception without the convergence-accommodation conflict, and they are currently receiving more and more attention. The future of VR/AR based on holographic display is predicted by analogy with VR/AR based on binocular vision display and light field display.
2. BINOCULAR VISION DISPLAY IN VR AND AR
Binocular vision display technology [18,19] has been used in various VR/AR products such as Sony PlayStation VR, Oculus Rift, and HTC Vive. The HTC Vive Pro is one of the best pieces of commercialized VR equipment based on binocular vision display. It has a resolution of 1440 × 1600 pixels per eye, a refresh rate of 90 Hz, and a field of view (FOV) of 110 deg.
The principle of binocular vision in VR devices is shown in Fig. 2. Two screens (or different parts of a single screen) in the VR device display two slightly different images. These two images pass through the optical elements and reach the left and right eyes, respectively. The two images are fused by the brain, and an immersive 3D experience is obtained.
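The binocular principle above can be illustrated with a minimal numerical sketch. The pinhole-projection model, the interpupillary distance, and the focal length below are illustrative assumptions, not parameters from the text; the point is that the same 3D point lands at horizontally offset positions in the two eye images, and the brain interprets that disparity as depth.

```python
# Minimal sketch (hypothetical parameters): project one 3D point into the
# left- and right-eye views of a binocular VR display. The horizontal
# disparity between the two projections is what the brain fuses into depth.

IPD = 0.064          # interpupillary distance in meters (typical adult value)
FOCAL = 0.05         # focal length of the simple eye/eyepiece model (assumed)

def project(point, eye_x):
    """Pinhole projection of a 3D point (x, y, z) for an eye at (eye_x, 0, 0)."""
    x, y, z = point
    return (FOCAL * (x - eye_x) / z, FOCAL * y / z)

point = (0.0, 0.0, 1.0)                  # a point 1 m in front of the viewer
left = project(point, -IPD / 2)
right = project(point, +IPD / 2)
disparity = left[0] - right[0]           # larger disparity -> nearer object
print(disparity)
```

Disparity falls off as 1/z, which is why binocular cues are strongest for nearby objects.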
AR devices based on binocular vision display are slightly different from VR devices: users can see real scenes while watching the virtual images. One common method to achieve this goal is to design an optical path so that the screen does not coincide with the real-world viewing window. Another excellent option is to use transparent optical elements so that both virtual images and real scenes can reach the eyes. Figure 3 shows an AR device that uses a waveguide and a holographic optical element (HOE). Both real scenes and virtual images can be seen with such an AR system.
VR/AR devices based on binocular vision display have a higher resolution and a larger viewing angle. However, since the convergence point of the eyes does not coincide with the focus point, convergence-accommodation conflicts appear in this type of VR/AR device. Viewing with this type of device for a long time causes dizziness and fatigue. Moreover, the 3D image displayed by this type of device is a pseudo 3D image synthesized by the brain, so its display effect differs from that of real objects. The improvement directions for VR/AR devices based on binocular vision display are to reduce convergence-accommodation conflicts, improve 3D effects, and continuously increase resolution. To achieve these goals, gyroscopes, accelerometers, and magnetometers are often employed by this type of device.
3. LIGHT FIELD DISPLAY IN VR AND AR
VR/AR devices such as Magic Leap One use light field display technology [6,23,24–36] to give users 3D immersion. Magic Leap One has a resolution of about 1280 × 960 pixels per eye, a refresh rate above 60 Hz, and a FOV of 40 deg in the horizontal direction and 30 deg in the vertical direction.
Light field display is a technique wherein the depth information of the radiation field is displayed by arrays of pixels instead of single pixels. There are many methods to achieve light field display, including the layer-based method, the shutter-based method [24,25], the integral imaging method, and the directional backlight method. The principle of a typical VR device based on light field display is shown in Fig. 4. Two light rays r1 and r2, emitted by an object point P, enter the left and right eyes, respectively. Ray r1 passes through pixels a1 and b1 on two separate screens: its direction is determined by the positions of a1 and b1, and its intensity by the transmittances of a1 and b1. Similarly, r2 can be expressed by pixels a2 and b2. In this way, every point in space can be represented by arrays of pixels.
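The two-layer idea above can be sketched in a few lines. The layer depths, pixel positions, and transmittance values below are made-up illustrative numbers; what the sketch shows is that fixing one pixel on each of two stacked screens fixes a ray's direction (from the two positions) and its intensity (from the product of the two transmittances).

```python
# Sketch of the two-layer (multiplicative) light field principle, with
# assumed layer positions and transmittance values. A ray is identified by
# the pixel it crosses on each layer.

Z_REAR, Z_FRONT = 0.00, 0.01   # layer depths along the optical axis (m, assumed)

def ray_through(x_rear, t_rear, x_front, t_front):
    """Return (slope, intensity) of the ray fixed by one pixel per layer."""
    slope = (x_front - x_rear) / (Z_FRONT - Z_REAR)   # direction from positions
    intensity = t_rear * t_front                      # multiplicative attenuation
    return slope, intensity

slope, intensity = ray_through(0.000, 0.8, 0.002, 0.5)
print(slope, intensity)
```

Solving for the two transmittance patterns that best reproduce a target set of rays is the optimization problem at the heart of layer-based light field displays.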
Similar to AR devices based on binocular vision display, AR devices based on light field display also mainly use two methods to combine virtual images with real scenes. One method is designing a special optical path to separate the real scene’s light path from the virtual scene’s light path. The other method, adopted by Microsoft and Magic Leap [15,34], is employing waveguides and transparent optical elements to mix the real scene and virtual scene together.
VR/AR devices based on light field display can display dynamic 3D images without using a coherent light source. Compared to devices based on binocular vision, devices based on light field display have better occlusion effects. But the resolution of this type of VR/AR device is lower, and the depth range and viewing angle are smaller. Meanwhile, this type of device needs a high-performance GPU to provide the demanding level of computing power, which makes them large and heavy. Moreover, although convergence-accommodation conflict in VR/AR devices based on light field display is much weaker than that in VR/AR devices based on binocular vision display, this problem still has not been solved perfectly. The improvement directions of this type of VR/AR device include increasing resolution, expanding the depth range and viewing angle, reducing convergence-accommodation conflicts, and using split type design to reduce the size and weight of the head-mounted parts.
4. HOLOGRAPHIC DISPLAY IN VR AND AR
A. Computer-Generated Holographic Technology in VR and AR
Holographic display is a technique that realizes 3D reconstruction by using interference fringes. All of the information of an object, including amplitude and phase, can be reconstructed from interference fringes under illumination by the reference light. Traditional holographic reconstruction is realized with photosensitive materials. However, photosensitive materials cannot be written and erased repeatedly, and holographic display systems based on them are easily influenced by vibration. Thus, traditional holographic technology is not suitable for VR/AR.
With the rapid development of computer technology, holograms can now be calculated by algorithms. To display computer-generated holograms (CGHs), a spatial light modulator (SLM) is employed. Compared to traditional holographic technology, CGH has three advantages. First, holograms are generated by computer rather than by interference in photosensitive materials, so the unfavorable effects of the experimental environment and operating factors on hologram quality can be avoided. Second, compared to optical holograms, CGHs are easier to duplicate and distribute. Third, CGH can record the information of both real objects and non-existent objects produced by AutoCAD, SolidWorks, and other 3D modeling software. Nowadays, VR/AR devices based on CGH display have received more and more attention.
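The simplest possible CGH, the fringe pattern of a single object point, can be computed directly from the interference of a spherical object wave with a plane reference wave. The wavelength, pixel pitch, and point position below are illustrative values of typical magnitude, not parameters of any system described in the text.

```python
# A toy computer-generated hologram: the interference fringe of a single
# object point with an on-axis plane reference wave, sampled on the hologram
# plane. All optical parameters are illustrative assumptions.
import numpy as np

WAVELEN = 532e-9      # green laser wavelength (m)
PITCH = 8e-6          # hologram pixel pitch (m, typical SLM order of magnitude)
N = 256               # hologram is N x N pixels

def point_hologram(x0, y0, z0):
    """Fringe pattern of a point at (x0, y0, z0) interfering with a unit
    plane reference wave (constant amplitude factors dropped for clarity)."""
    coords = (np.arange(N) - N / 2) * PITCH
    X, Y = np.meshgrid(coords, coords)
    r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + z0 ** 2)   # point-to-pixel distance
    k = 2 * np.pi / WAVELEN
    object_wave = np.exp(1j * k * r) / r    # spherical wave from the point
    reference = 1.0                         # on-axis plane reference wave
    return np.abs(object_wave + reference) ** 2   # recorded intensity

h = point_hologram(0.0, 0.0, 0.05)
print(h.shape)
```

The resulting pattern is a Fresnel zone structure; illuminating it with the reference wave diffracts light that reconverges at the original point, which is the basic mechanism every CGH algorithm in the following section builds on.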
An AR system based on CGH technology [39–42] is shown in Fig. 5. The CGH of an image to be displayed is uploaded onto the SLM via the driver. When the reference light illuminates the SLM, the diffracted light can reach human eyes through one direction of the beam splitter (BS). The real scenes of the outside world can be viewed through another direction of the BS.
The VR/AR system based on CGH technology has some outstanding advantages. It has the potential for compact design [43,44]. Users can get a brilliant viewing effect because there is no crosstalk and no depth reversal. It has good stability because there are no mechanical parts in the system. Therefore, it is considered an ideal solution for VR/AR devices. A comparison of VR/AR devices based on binocular vision, light field display, and holographic technology is shown in Table 2. Only VR/AR devices based on holographic display can provide all types of 3D visual perception without the convergence-accommodation conflict.
Recently, many investigations on VR/AR based on holographic display have been carried out. In 2014, Moon et al. proposed a color holographic near-eye display system using an LED as the light source. To make the structure more compact, the LED light source is coupled into a multimode fiber and transmitted to the SLM. In 2015, Chen et al. proposed a head-mounted display system in which the CGH loaded onto the SLM is reconstructed at the position of the human eye by the optical system; users can interact with the images displayed by this system. In 2017, Gao et al. proposed a see-through 3D near-eye display system in which a holographic grating is used as a frequency filter to improve the display quality. Maimone et al. also proposed a VR/AR system based on holographic display whose advantages include high resolution, a large FOV, and a changeable reconstruction distance. In 2018, Sun et al. proposed a double-convergence light Gerchberg–Saxton algorithm for a holographic VR/AR display system. With this algorithm, reconstructed images with a 180 cm zooming range and continuous depth cues can be obtained, while noise in the system is reduced to a very low level. Hong et al. proposed a holographic display system for see-through AR display, which achieves a large FOV by using a transmission-type optical floater, an index-matched anisotropic crystal lens. Hong et al. also proposed a novel electro-holographic display that provides 3D images near the fovea and 2D images on the periphery. In this way, the convergence-accommodation conflict of the near-eye display can be eliminated.
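The double-convergence algorithm of Sun et al. builds on the classical Gerchberg–Saxton iteration, which can be sketched as follows. This is the plain textbook GS loop, not the double-convergence variant itself; the target image, iteration count, and Fourier-transform propagation model are all illustrative assumptions.

```python
# A bare-bones Gerchberg-Saxton loop: iteratively find a phase-only hologram
# whose far field (modeled here as a Fourier transform) approximates a target
# amplitude. Target and iteration count are placeholders.
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=20):
    """Return a phase map for a phase-only SLM approximating the target."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # Hologram plane: enforce unit amplitude (phase-only SLM constraint).
        field = np.exp(1j * phase)
        # Propagate to the image plane.
        image = np.fft.fft2(field)
        # Image plane: enforce the target amplitude, keep the computed phase.
        image = target_amplitude * np.exp(1j * np.angle(image))
        # Propagate back and extract the new hologram phase.
        phase = np.angle(np.fft.ifft2(image))
    return phase

target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0          # a simple square as the target image
hologram_phase = gerchberg_saxton(target)
print(hologram_phase.shape)
```

Variants such as the double-convergence method modify the constraints applied in each plane to control the reconstruction distance and suppress noise.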
Although VR/AR devices based on holographic display are developing rapidly, many shortcomings still need to be overcome. First, VR/AR devices based on holographic display are often influenced by speckle: most of them must be illuminated by coherent light sources, and coherent illumination causes speckle. Many methods have been proposed to reduce the speckle, such as phase calculation [55,56] and ray-sampling. Second, the performance of existing platforms makes it difficult to support real-time holographic display with high display quality. Speeding up the calculation of CGHs and improving the reconstruction quality are the most important issues for VR/AR devices based on holographic display; thus, advanced algorithms should be developed. Third, due to the limited space-bandwidth product (SBP) of the SLM, the size and the FOV of the reconstructed images in VR/AR devices based on holographic display are quite small. New systems, elements, and materials are needed to expand the SBP of holographic VR/AR devices.
B. Challenges of Algorithms in VR/AR Based on Holographic Technology
The point-cloud method [58–60] is the most commonly used method for CGH generation. In this method, the 3D object is regarded as a collection of points. The interference fringe of each point on the holographic plane is calculated separately, and the CGH of the 3D object is the superposition of the interference fringes of all points. Huge amounts of data and a repetitive calculation process make this method time-consuming. The look-up table (LUT) method [61–67] is a useful option for reducing the amount of calculation needed by the point-cloud method. This method stores every interference fringe formed by points in the object space in the LUT; when calculating the CGH of a 3D object, it is only necessary to match the points of the object with the results in the LUT. This method gets rid of the repeated calculation process and accelerates CGH generation. However, it places high demands on storage capacity and data transfer rate. In addition to the LUT method, the rapid development of hardware-acceleration techniques such as OpenCL, field-programmable gate arrays, and distributed parallel processing has also created conditions for accelerating the point-cloud method.
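The point-cloud method with a LUT shortcut can be sketched as follows. The cache-by-depth scheme with shifted copies of an on-axis fringe is one simple illustrative realization of the LUT idea (wrap-around from the shift is ignored here); sizes and optical parameters are assumed values.

```python
# Sketch of the point-cloud method with a look-up-table shortcut: the fringe
# for each depth is computed once, cached, and reused (shifted) for every
# point at that depth. Parameters are illustrative assumptions.
import numpy as np
from functools import lru_cache

WAVELEN, PITCH, N = 532e-9, 8e-6, 128
K = 2 * np.pi / WAVELEN
coords = (np.arange(N) - N / 2) * PITCH
X, Y = np.meshgrid(coords, coords)

@lru_cache(maxsize=None)
def depth_fringe(z):
    """On-axis spherical-wave fringe for depth z (the LUT entry)."""
    r = np.sqrt(X ** 2 + Y ** 2 + z ** 2)
    return np.exp(1j * K * r)

def point_cloud_cgh(points):
    """Superpose the shifted cached fringes of all object points."""
    hologram = np.zeros((N, N), dtype=complex)
    for ix, iy, z in points:                      # pixel offsets + depth
        fringe = depth_fringe(z)
        hologram += np.roll(np.roll(fringe, iy, axis=0), ix, axis=1)
    return hologram

cgh = point_cloud_cgh([(0, 0, 0.05), (10, -5, 0.05), (3, 3, 0.06)])
print(cgh.shape)
```

The second point at depth 0.05 reuses the cached fringe instead of recomputing it, which is exactly the trade of memory for computation that the LUT method makes.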
The surface-panel method (polygon-based method) [71–77] is also commonly used for CGH generation. Similar to the point-cloud method, the surface-panel method regards the 3D object as a number of surface panels. The coherent distribution of each surface panel is calculated separately, and the CGH of the 3D object is calculated by superposing these coherent distributions. The surface-panel method is faster than the point-cloud method [75,76]. However, it is still difficult to apply in real-time VR/AR devices. Meanwhile, the quality of the reconstructed image based on this method is often seriously affected by undesirable messy fringes, and efforts are needed to solve this problem.
The layer-based method is another commonly used method to generate CGHs [78–82]. This method divides the 3D object into a series of layers and then performs a Fourier transform on each layer. However, most CGH algorithms based on the layer-based method rely on the paraxial approximation, which reduces the quality of results in near-distance reconstruction. This problem is more severe in high-numerical-aperture systems. Besides, in this method, the sampling interval of the 3D object is related to the reconstruction distance and the wavelength, which differs from the sampling interval on the hologram plane. Solving these problems is complex [83–85].
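A layer-based CGH can be sketched with the angular spectrum method, one standard non-paraxial propagator (used here for illustration; the paraxial algorithms criticized above would use a Fresnel transfer function instead). Layer contents, depths, and optical parameters are assumed values.

```python
# Layer-based CGH sketch: slice the scene into depth layers, propagate each
# to the hologram plane with the angular spectrum method, and sum the fields.
# All parameters are illustrative assumptions.
import numpy as np

WAVELEN, PITCH, N = 532e-9, 8e-6, 128

def angular_spectrum(field, z):
    """Propagate a sampled field by distance z via the angular spectrum."""
    fx = np.fft.fftfreq(N, d=PITCH)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / WAVELEN ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))   # evanescent part clamped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def layer_cgh(layers):
    """Sum the propagated fields of (amplitude_image, depth) layers."""
    hologram = np.zeros((N, N), dtype=complex)
    for amplitude, z in layers:
        hologram += angular_spectrum(amplitude.astype(complex), z)
    return hologram

near = np.zeros((N, N)); near[40:60, 40:60] = 1.0
far = np.zeros((N, N)); far[70:90, 70:90] = 1.0
cgh = layer_cgh([(near, 0.05), (far, 0.10)])
print(cgh.shape)
```

Because each layer costs only a pair of FFTs, the total cost scales with the number of layers rather than the number of object points, which is why the method is attractive for real-time use.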
The 3D perspective method [86,87] is a fast CGH-generation method. This method applies the multi-view projection technique into CGH generation. Projections of the 3D object in various directions are obtained through a series of digital cameras. Different angular offsets are superposed onto the obtained projections to indicate the position information of these 2D projections. A Fourier transform is performed on every 2D image with angular offset. The CGH of the 3D object can be obtained by superposing the Fourier transform results of these different 2D images. However, in order to obtain projections of the 3D object in various directions, lens arrays are often used. The size of the lens arrays limits the information acquisition from large 3D objects. Therefore, this method is not suitable for the reconstruction of large 3D objects. The comparison of the above-mentioned algorithms is shown in Table 3.
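The 3D perspective method described above can be sketched as follows. Encoding the angular offset as a linear phase ramp multiplied onto each projection before the Fourier transform is one simple illustrative realization; the view images, carrier frequencies, and array size are all placeholders.

```python
# Sketch of the 3D perspective (multi-view projection) method: each 2D
# projection receives a linear-phase "angular offset" encoding its view
# direction, is Fourier transformed, and the results are summed into one CGH.
# All inputs are illustrative placeholders.
import numpy as np

N = 64
x = np.arange(N) - N / 2
X, Y = np.meshgrid(x, x)

def perspective_cgh(projections):
    """projections: list of (image, (fx, fy)) with per-view carrier frequencies."""
    cgh = np.zeros((N, N), dtype=complex)
    for image, (fx, fy) in projections:
        # The linear phase ramp encodes this projection's viewing direction.
        offset = np.exp(2j * np.pi * (fx * X + fy * Y) / N)
        cgh += np.fft.fft2(image * offset)
    return cgh

view_a = np.random.default_rng(1).random((N, N))   # projection from one angle
view_b = np.random.default_rng(2).random((N, N))   # projection from another
cgh = perspective_cgh([(view_a, (4, 0)), (view_b, (-4, 0))])
print(cgh.shape)
```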
In order to realize fast reconstruction of high-quality 3D images, the layer-based method is considered the best option for VR/AR devices based on holographic display under current hardware conditions. Compared to the point-cloud method and surface-panel method, it is faster in computation and has lower hardware requirements. With the help of a high-performance GPU, VR/AR devices based on this method are capable of real-time holographic reconstruction. A split-type design like that of Magic Leap One could be applied in holographic VR/AR devices with GPUs to ensure improved wearing comfort. Compared to the 3D perspective method, it has a better reconstruction effect for large-scale 3D scenes, which enhances the visual experience. However, the problems of the paraxial approximation and the sampling interval should be solved to improve the reconstruction quality.
C. Challenges of SBP Expansion in VR/AR Based on Holographic Technology
For VR/AR devices based on holographic display, the display quality is also greatly restricted by SLMs. The pixel count and refresh rate of the SLM determine the SBP of the VR/AR device. The SBP limits the total amount of data that the device can present, affecting the resolution of the 3D display.
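Two back-of-the-envelope numbers make the SBP limitation concrete: the pixel count sets the information capacity, and the pixel pitch sets the maximum diffraction (viewing) angle through the grating equation sin θ = λ / (2p). The full-HD pixel count and the two pitch values below are typical illustrative figures, not the specifications of any particular SLM.

```python
# Rough numbers for the SBP discussion: information capacity from pixel
# count, viewing angle from pixel pitch via sin(theta) = lambda / (2 * pitch).
# Example parameters are typical values, not a specific device.
import math

WAVELEN = 532e-9                 # illumination wavelength (m)

def max_diffraction_angle_deg(pitch):
    """Half-angle of the diffraction cone for a given pixel pitch."""
    return math.degrees(math.asin(WAVELEN / (2 * pitch)))

pixels = 1920 * 1080             # spatial samples of a full-HD SLM (~2.1e6)
print(pixels)
print(max_diffraction_angle_deg(8e-6))   # ~1.9 deg at a typical 8 um pitch
print(max_diffraction_angle_deg(1e-6))   # ~15 deg at a hypothetical 1 um pitch
```

The jump from roughly 2 deg to roughly 15 deg when the pitch shrinks from 8 μm to 1 μm is why the small-pixel materials discussed below promise such large FOV gains.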
In order to improve the display quality, researchers use multi-SLM technology to expand the SBP of the display system. In 2008, Hahn et al. expanded the display system's SBP by using an SLM array. In 2013, Lum et al. realized high-quality holographic display by using an SLM array; in this system, the pixel count of the CGH reached 3.775 megapixels. In 2016, Lim et al. proposed a holographic display system using four synchronized high-speed digital micro-mirror displays, which achieves an enlarged 3.2-inch holographic display with a 45-deg viewing angle. Multi-SLM technology can effectively improve the SBP of VR/AR devices and achieve dynamic 3D reconstruction with high resolution. Although this method has greatly improved the display quality of holographic display systems, it cannot be applied in VR/AR devices immediately: bulky brackets are used to hold the SLMs, and the driving and computing modules are large and heavy. To apply multi-SLM technology to VR/AR devices, a revolutionary manufacturing process is expected to reduce the pixel size of the SLMs. A split-type design like that of Magic Leap One should be employed to separate the driving module and the computing module from the display module, and high-bandwidth wireless transmission technology needs to be considered to meet the requirements of the huge amounts of data.
Besides multi-SLM technology, increasing the refresh rate of the SLM is another method to expand the SBP. Some researchers use a temporal multiplexing method based on an SLM with a high refresh rate to improve the display quality. In 2015, Li et al. proposed a method for enlarging the viewing zone of a holographic display by temporal multiplexing; the viewing zone is expanded by three times with this method. In 2017, Kim et al. presented a synchronization method for a 360-deg holographic display implemented by a time-division multiplexing technique. In 2018, Ko et al. proposed a method that eliminates speckle artifacts by means of a high-refresh-rate SLM, which greatly improves the display quality.
In addition to optimizing systems based on existing types of SLMs to expand the SBP of VR/AR devices, developing new materials and elements is another option. In 2008, Tay et al. proposed a large, refreshable holographic display screen based on a photorefractive polymer. In 2013, Smalley et al. invented a novel anisotropic SLM based on the leaky mode, which overcame the limitations of existing SLMs and realized new functions such as polarization selection and diffraction angle expansion. In 2015, Gu et al. proposed a wide-view full-color 3D display material based on graphene, which can realize multi-wavelength modulation at the sub-wavelength scale. In 2016, Wang et al. proposed a holographic display system based on metasurfaces, whose maximum energy utilization rate can reach 99% and whose pixel size can be as small as 0.59 μm.
These new materials open new doors for VR/AR devices based on holographic display. They are more flexible in size and thus could be used in VR/AR devices with various shapes. Their pixel size is much smaller than that of conventional SLMs, so the resolution of VR/AR devices based on these new materials could be greatly improved. Meanwhile, a small pixel size brings a bigger diffraction angle, which could improve the FOV. However, these new materials cannot be applied in VR/AR devices in the near future. First, they cannot be used for dynamic 3D VR/AR because of their relatively low refresh rate. Second, the modulation of the transmissivity is not flexible because they do not support electronically addressed modulation at present. Therefore, increasing the refresh rate and developing electronically addressable elements are the main challenges. Viewed optimistically, however, we can predict that these revolutionary materials and devices will be the most active field of VR/AR in the future. A comparison of the above-mentioned methods for SBP expansion is shown in Table 4.
VR/AR devices based on binocular vision display and light field display suffer from convergence-accommodation conflicts, which lead to dizziness and fatigue. Holographic display can provide all types of 3D visual perception without the convergence-accommodation conflict and is therefore considered an ideal solution for VR/AR. Currently, slow calculation speed and insufficient SBP are the most important shortcomings restricting the practical application of holographic VR/AR devices. Advanced algorithms should be developed to speed up the calculation of CGHs and improve the reconstruction quality of holographic VR/AR devices. Meanwhile, new systems, elements, and materials are needed to expand the SBP of holographic VR/AR devices.
National Natural Science Foundation of China (NSFC) (61775117); National Basic Research Program of China (2013CB32881).
The authors thank the reviewers for their helpful comments and suggestions, which have improved the paper substantially.
1. J. Hecht, “Optical dreams, virtual reality,” Opt. Photon. News 27(6), 24–31 (2016). [CrossRef]
2. C. H. Li, S. H. Lu, S. Y. Lin, T. Y. Hsieh, K. S. Wang, and W. H. Kuo, “Invited paper: ultra-fast moving-picture response-time LCD for virtual reality application,” SID Int. Symp. Dig. Tech. Pap. 49, 678–680 (2018). [CrossRef]
3. C. Vieri, G. Lee, N. Balram, S. H. Jung, J. Y. Yang, S. Y. Yoon, and I. B. Kang, “An 18 megapixel 4.3” 1443 ppi 120 Hz OLED display for wide field of view high acuity head mounted displays,” J. Soc. Inf. Disp. 26, 314–324 (2018). [CrossRef]
4. G. Tan, Y. H. Lee, T. Zhan, J. Yang, S. Liu, D. Zhao, and S. T. Wu, “Foveated imaging for near-eye displays,” Opt. Express 26, 25076–25085 (2018). [CrossRef]
5. G. Tan, Y. Huang, M. C. Li, S. L. Lee, and S. T. Wu, “High dynamic range liquid crystal displays with a mini-LED backlight,” Opt. Express 26, 16572–16584 (2018). [CrossRef]
6. T. Zhan, Y. H. Lee, and S. T. Wu, “High-resolution additive light field near-eye display by switchable Pancharatnam-Berry phase lenses,” Opt. Express 26, 4863–4872 (2018). [CrossRef]
7. K. Hong, J. Yeom, C. Jang, J. Hong, and B. Lee, “Full-color lens-array holographic optical element for three-dimensional optical see-through augmented reality,” Opt. Lett. 39, 127–130 (2014). [CrossRef]
8. S. Xie, P. Wang, X. Sang, and C. Li, “Augmented reality three-dimensional display with light field fusion,” Opt. Express 24, 11483–11494 (2016). [CrossRef]
9. J. Wang, X. Xiao, H. Hua, and B. Javidi, “Augmented reality displays with micro integral imaging,” J. Disp. Technol. 11, 889–893 (2015). [CrossRef]
10. Y. H. Lee, G. Tan, K. Yin, T. Zhan, and S. T. Wu, “Compact see-through near-eye display with depth adaption,” J. Soc. Inf. Disp. 26, 64–70 (2018). [CrossRef]
11. K. Okuyama, T. Nakahara, Y. Numata, T. Nakamura, M. Mizuno, H. Sugiyama, S. Nomura, S. Takeuchi, Y. Oue, H. Kato, S. Ito, A. Hasegawa, T. Ozaki, M. Douyou, T. Imai, K. Takizawa, and S. Matsushima, “Late—news paper: highly transparent LCD using new scattering-type liquid crystal with field sequential color edge light,” SID Int. Symp. Dig. Tech. Pap. 48, 1166–1169 (2017). [CrossRef]
12. Y. T. Liu, K. Y. Liao, C. L. Lin, and Y. L. Li, “Invited paper: pixeLED display for transparent applications,” SID Int. Symp. Dig. Tech. Pap. 49, 874–875 (2018). [CrossRef]
13. H. Bellini, W. Chen, M. Sugiyama, M. Shin, S. Alam, and D. Takayama, Profiles in Innovation of Virtual & Augmented Reality (Goldman Sachs, 2016).
14. J. Liu, Q. Gao, and J. Han, “Compact monocular 3D near-eye display,” in Imaging and Applied Optics 2017, OSA Technical Digest (Optical Society of America, 2017), paper DTu4F.2.
15. B. C. Kress and W. J. Cummings, “11-1: Invited paper: towards the ultimate mixed reality experience: Hololens display architecture choices,” SID Int. Symp. Dig. Tech. Pap. 48, 127–131 (2017). [CrossRef]
16. P. Daugherty, “Extended reality summary,” https://www.accenture.com/us-en/insight-xr-extended-reality.
17. Qualcomm Technologies Inc., “The mobile future of extended reality (XR),” https://www.qualcomm.com/media/documents/files/the-mobile-future-of-extended-reality-xr.pdf.
18. T. North, M. Wagner, S. Bourquin, and L. Kilcher, “Compact and high-brightness helmet-mounted head-up display system by retinal laser projection,” J. Disp. Technol. 12, 982–985 (2016). [CrossRef]
19. Y. Wang, W. Liu, X. Meng, H. Fu, D. Zhang, Y. Kang, R. Feng, Z. Wei, X. Zhu, and G. Jiang, “Development of an immersive virtual reality head-mounted display with high performance,” Appl. Opt. 55, 6969–6977 (2016). [CrossRef]
20. T. Yoneyama, C. Yang, Y. Sakamoto, and F. Okuyama, “Eyepiece-type full-color electro-holographic binocular display with see-through vision,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (Optical Society of America, 2013), paper DW2A.11.
21. D. Kersten and G. E. Legge, “Convergence accommodation,” J. Opt. Soc. Am. 73, 332–338 (1983). [CrossRef]
22. E. Peli, “Real vision & virtual reality,” Opt. Photon. News 6(7), 28–34 (1995). [CrossRef]
23. G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, “Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays,” ACM Trans. Graph. 30, 1–10 (2011). [CrossRef]
24. L. Liu, Z. Pang, and D. Teng, “Super multi-view three-dimensional display technique for portable devices,” Opt. Express 24, 4421–4430 (2016). [CrossRef]
25. D. Sun, C. Wang, D. Teng, and L. Liu, “Three-dimensional display on computer screen free from accommodation-convergence conflict,” Opt. Commun. 390, 36–40 (2017). [CrossRef]
26. X. Yu, X. Sang, X. Gao, Z. Chen, D. Chen, W. Duan, B. Yan, C. Yu, and D. Xu, “Large viewing angle three-dimensional display with smooth motion parallax and accurate depth cues,” Opt. Express 23, 25950–25958 (2015). [CrossRef]
27. D. Fattal, Z. Peng, T. Tran, S. Vo, M. Fiorentino, J. Brug, and R. G. Beausoleil, “A multi-directional backlight for a wide-angle, glasses-free three-dimensional display,” Nature 495, 348–351 (2013). [CrossRef]
28. F. C. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cue,” ACM Trans. Graph. 34, 1–12 (2015). [CrossRef]
29. C. Yao, D. Cheng, T. Yang, and Y. Wang, “Design of an optical see-through light-field near-eye display using a discrete lenslet array,” Opt. Express 26, 18292–18301 (2018). [CrossRef]
30. H. Huang and H. Hua, “High-performance integral-imaging-based light field augmented reality display using freeform optics,” Opt. Express 26, 17578–17590 (2018). [CrossRef]
31. L. Wei, Y. Li, J. Jing, L. Feng, and J. Zhou, “Design and fabrication of a compact off-axis see-through head-mounted display using a freeform surface,” Opt. Express 26, 8550–8565 (2018). [CrossRef]
32. X. Shen and B. Javidi, “Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens,” Appl. Opt. 57, B184–B189 (2018). [CrossRef]
33. H. Hua and B. Javidi, “A 3D integral imaging optical see-through head-mounted display,” Opt. Express 22, 13484–13491 (2014). [CrossRef]
34. K. Guttag, “Magic Leap review part 2- image issue,” https://www.kguttag.com/2018/10/01/magic-leap-review-part-2-image-issues/.
35. S. Lee, C. Jang, S. Moon, J. Cho, and B. Lee, “Additive light field displays: realization of augmented reality with holographic optical elements,” ACM Trans. Graph. 35, 1–13 (2016).
36. C. Jang, K. Bang, S. Moon, J. Kim, S. Lee, and B. Lee, “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph. 36, 1–13 (2017). [CrossRef]
37. J. W. Goodman, Introduction to Fourier Optics (W. H. Freeman, 2017).
38. A. Kozma and D. L. Kelly, “Spatial filtering for detection of signals submerged in noise,” Appl. Opt. 4, 387–392 (1965). [CrossRef]
39. Z. Chen, X. Sang, Q. Lin, J. Li, X. Yu, X. Gao, B. Yan, C. Yu, W. Dou, and L. Xiao, “Acceleration for computer-generated hologram in head-mounted display with effective diffraction area recording method for eyes,” Chin. Opt. Lett. 14, 080901 (2016). [CrossRef]
40. G. Li, D. Lee, Y. Jeong, J. Cho, and B. Lee, “Holographic display for see-through augmented reality using mirror-lens holographic optical element,” Opt. Lett. 41, 2486–2489 (2016). [CrossRef]
41. Y. Z. Liu, X. N. Pang, S. Jiang, and J. W. Dong, “Viewing-angle enlargement in holographic augmented reality using time division and spatial tiling,” Opt. Express 21, 12068–12076 (2013). [CrossRef]
42. J. Park and S. Kim, “Optical see-through holographic near-eye-display with eyebox steering and depth of field control,” Opt. Express 26, 27076–27088 (2018). [CrossRef]
43. P. Zhou, Y. Li, S. Liu, and Y. Su, “Compact design for optical-see-through holographic displays employing holographic optical elements,” Opt. Express 26, 22866–22876 (2018). [CrossRef]
44. E. Murakami, Y. Oguro, and Y. Sakamoto, “Study on compact head-mounted display system using electro-holography for augmented reality,” IEICE Trans. Electron. E100-C, 965–971 (2017). [CrossRef]
45. J. S. Lee, Y. K. Kim, and Y. H. Won, “See-through display combined with holographic display and Maxwellian display using switchable holographic optical element based on liquid lens,” Opt. Express 26, 19341–19355 (2018). [CrossRef]
46. Z. Zeng, H. Zheng, Y. Yu, A. K. Asundi, and S. Valyukh, “Full-color holographic display with increased-viewing-angle [Invited],” Appl. Opt. 56, 112–120 (2017). [CrossRef]
47. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22, 6526–6534 (2014). [CrossRef]
48. J.-S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23, 18143–18155 (2015). [CrossRef]
49. Q. Gao, J. Liu, X. Duan, T. Zhao, X. Li, and P. Liu, “Compact see-through 3D head-mounted display based on wavefront modulation with holographic grating filter,” Opt. Express 25, 8412–8424 (2017). [CrossRef]
50. A. Maimone, A. Georgiou, and J. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36, 8501–8516 (2017). [CrossRef]
51. P. Sun, S. Chang, S. Liu, X. Tao, C. Wang, and Z. Zheng, “Holographic near-eye display system based on double-convergence light Gerchberg-Saxton algorithm,” Opt. Express 26, 10140–10151 (2018). [CrossRef]
52. J. Y. Hong, G. Li, and B. Lee, “Holographic see-through near-eye display using index-matched anisotropic crystal lens,” in Imaging and Applied Optics 2018, OSA Technical Digest (Optical Society of America, 2018), paper DTu2F.3.
53. J. Hong, Y. Kim, S. Hong, C. Shin, and H. Kang, “Near-eye foveated holographic display,” in Imaging and Applied Optics, OSA Technical Digest (Optical Society of America, 2018), paper 3M2G.4.
54. M. Yamaguchi, “Light-field and holographic three-dimensional displays [Invited],” J. Opt. Soc. Am. A 33, 2348–2364 (2016). [CrossRef]
55. C. Chang, J. Xia, L. Yang, W. Lei, Z. Yang, and J. Chen, “Speckle-suppressed phase-only holographic three-dimensional display based on double-constraint Gerchberg-Saxton algorithm,” Appl. Opt. 54, 6994–7001 (2015).
56. Y. Qi, C. Chang, and J. Xia, “Speckleless holographic display by complex modulation based on double-phase method,” Opt. Express 24, 30368–30378 (2016).
57. T. Utsugi and M. Yamaguchi, “Speckle-suppression in hologram calculation using ray-sampling plane,” Opt. Express 22, 17193–17206 (2014).
58. J. P. Waters, “Holographic image synthesis utilizing theoretical methods,” Appl. Phys. Lett. 9, 405–407 (1966).
59. P. Su, W. Cao, J. Ma, B. Cheng, X. Liang, L. Cao, and G. Jin, “Fast computer-generated hologram generation method for three-dimensional point cloud model,” J. Disp. Technol. 12, 1688–1694 (2016).
60. A. Symeonidou, D. Blinder, and P. Schelkens, “Colour computer-generated holography for point clouds utilizing the Phong illumination model,” Opt. Express 26, 10282–10298 (2018).
61. M. E. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2, 28–34 (1993).
62. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34, 3133–3135 (2009).
63. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18, 19504–19509 (2010).
64. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram using run-length encoding based recurrence relation,” Opt. Express 23, 9852–9857 (2015).
65. S. C. Kim and E. S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47, D55–D62 (2008).
66. S. C. Kim, J. M. Kim, and E. S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20, 12021–12034 (2012).
67. S. Jiao, Z. Zhuang, and W. Zou, “Fast computer generated hologram calculation with a mini look-up table incorporated with radial symmetric interpolation,” Opt. Express 25, 112–123 (2017).
68. T. Shimobaba, T. Ito, N. Masuda, Y. Ichihashi, and N. Takada, “Fast calculation of computer-generated-hologram on AMD HD5000 series GPU and OpenCL,” Opt. Express 18, 9955–9960 (2010).
69. Y. Ichihashi, H. Nakayama, T. Ito, N. Masuda, T. Shimobaba, A. Shiraki, and T. Sugie, “HORN-6 special-purpose clustered computing system for electroholography,” Opt. Express 17, 13895–13903 (2009).
70. S. Nishi, K. Shiba, K. Mori, S. Nakayama, and S. Murashima, “Fast calculation of computer-generated Fresnel hologram utilizing distributed parallel processing and array operation,” Opt. Rev. 12, 287–292 (2005).
71. K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44, 4607–4614 (2005).
72. J. P. Liu and H. K. Liao, “Fast occlusion processing for a polygon-based computer-generated hologram using the slice-by-slice silhouette method,” Appl. Opt. 57, A215–A221 (2018).
73. Z. Lu and Y. Sakamoto, “Holographic display methods for volume data: polygon-based and MIP-based methods,” Appl. Opt. 57, A142–A149 (2018).
74. Y. Zhao, K. C. Kwon, Y. Piao, S. H. Jeon, and N. Kim, “Depth-layer weighted prediction method for a full-color polygon-based holographic system with real objects,” Opt. Lett. 42, 2599–2602 (2017).
75. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, “Fast polygon-based method for calculating computer-generated holograms in three-dimensional display,” Appl. Opt. 52, A290–A299 (2013).
76. H. Kim, J. Kwon, and J. Hahn, “Accelerated synthesis of wide-viewing angle polygon computer-generated holograms using the interocular affine similarity of three-dimensional scenes,” Opt. Express 26, 16853–16874 (2018).
77. X. N. Pang, D. C. Chen, Y. C. Ding, Y. G. Chen, S. J. Jiang, and J. W. Dong, “Image quality improvement of polygon computer generated holography,” Opt. Express 23, 19066–19073 (2015).
78. S. Trester, “Computer-simulated Fresnel holography,” Eur. J. Phys. 21, 317–331 (2000).
79. Y. Sando, M. Itoh, and T. Yatagai, “Holographic three-dimensional display synthesized from three-dimensional Fourier spectra of real existing objects,” Opt. Lett. 28, 2518–2520 (2003).
80. M. Bayraktar and M. Özcan, “Method to calculate the far field of three-dimensional objects for computer-generated holography,” Appl. Opt. 49, 4647–4654 (2010).
81. J. S. Chen, D. Chu, and Q. Y. Smithwick, “Rapid hologram generation utilizing layer-based approach and graphic rendering for realistic three-dimensional image reconstruction by angular tiling,” J. Electron. Imaging 23, 023016 (2014).
82. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23, 25440–25449 (2015).
83. R. P. Muffoletto, J. M. Tyler, and J. E. Tohline, “Shifted Fresnel diffraction for computational holography,” Opt. Express 15, 5631–5640 (2007).
84. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram for RGB and depth images using wavefront recording plane method,” Photon. Lett. Pol. 6, 90–92 (2014).
85. F. Zhang, I. Yamaguchi, and L. P. Yaroslavsky, “Algorithm for reconstruction of digital holograms with adjustable magnification,” Opt. Lett. 29, 1668–1670 (2004).
86. Y. Li, D. Abookasis, and J. Rosen, “Computer-generated holograms of three-dimensional realistic objects recorded without wave interference,” Appl. Opt. 40, 2864–2870 (2001).
87. N. T. Shaked and J. Rosen, “Modified Fresnel computer-generated hologram directly recorded by multiple-viewpoint projections,” Appl. Opt. 47, D21–D27 (2008).
88. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16, 12372–12386 (2008).
89. Z. M. A. Lum, X. Liang, Y. Pan, R. Zheng, and X. Xu, “Increasing pixel count of holograms for three-dimensional holographic display by optical scan-tiling,” Opt. Eng. 52, 015802 (2013).
90. Y. Lim, K. Hong, H. Kim, H. E. Kim, E. Y. Chang, S. Lee, T. Kim, J. Nam, H. G. Choo, J. Kim, and J. Hahn, “360-degree tabletop electronic holographic display,” Opt. Express 24, 24999–25009 (2016).
91. G. Li, J. Hong, D. Lee, J. Yeom, and B. Lee, “Viewing zone enlargement of holographic display using high order terms guided by holographic optical element,” in Imaging and Applied Optics 2015, OSA Technical Digest (Optical Society of America, 2015), paper JT5A.24.
92. H. Kim, K. Hong, Y. Lim, H. Choo, and J. Kim, “Continuous viewing window formation for 360-degree holographic display,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (Optical Society of America, 2017), paper W2A.22.
93. S. B. Ko and J. H. Park, “Speckle reduction using angular spectrum interleaving for triangular mesh based computer generated hologram,” Opt. Express 25, 29788–29797 (2017).
94. S. Tay, P. A. Blanche, R. Voorakaranam, A. V. Tunç, W. Lin, S. Rokutanda, T. Gu, D. Flores, P. Wang, G. Li, P. St Hilaire, J. Thomas, R. A. Norwood, M. Yamamoto, and N. Peyghambarian, “An updatable holographic three-dimensional display,” Nature 451, 694–698 (2008).
95. D. E. Smalley, Q. Y. Smithwick, V. M. Bove, J. Barabas, and S. Jolly, “Anisotropic leaky-mode modulator for holographic video displays,” Nature 498, 313–317 (2013).
96. X. Li, H. Ren, X. Chen, J. Liu, Q. Li, C. Li, G. Xue, J. Jia, L. Cao, A. Sahu, B. Hu, Y. Wang, G. Jin, and M. Gu, “Athermally photoreduced graphene oxides for three-dimensional holographic images,” Nat. Commun. 6, 6984 (2015).
97. Q. Wang, E. T. F. Rogers, B. Gholipour, C. M. Wang, G. Yuan, J. Teng, and N. I. Zheludev, “Optically reconfigurable metasurfaces and photonic devices based on phase change materials,” Nat. Photonics 10, 60–65 (2016).
98. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38, 46–53 (2005).