Abstract

Due to the lack of the accommodation stimulus, an inherent drawback of the conventional glasses-free stereoscopic display is that precise depth cues for human monocular vision are absent, which results in the well-known convergence-accommodation conflict for the human visual system. Here, a super multi-view light field display with a vertically-collimated programmable directional backlight (VC-PDB) and a light control module (LCM) is demonstrated. The VC-PDB and the LCM form a super multi-view light field display with low crosstalk, which can provide a precisely detectable accommodation depth for human monocular vision. Meanwhile, the VC-PDB cooperates with the refreshable liquid-crystal display panel to provide a convergence depth matching the accommodation depth. In addition, the proposed method of light field pick-up and reconstruction ensures that the perceived three-dimensional (3D) images have accurate depth cues and correct geometric occlusion, and an eye tracker is used to enlarge the viewing angle of the 3D images with smooth motion parallax. In the experiments, reconstructed high-quality fatigue-free 3D images can be perceived with a clear focus depth of 13 cm in a viewing angle of ±20°, where 352 viewpoints with a viewpoint density of 1 mm−1 and crosstalk of less than 6% are presented.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The real world is three dimensional (3D); however, the available 2D display technology cannot meet the expectation of exhibiting a natural and realistic 3D sense. Glasses-free autostereoscopic 3D display technology can present the depth perception and spatial position of objects, producing a 3D image similar to how humans observe a real 3D scene. The development of glasses-free stereoscopic display technology can be divided into many categories [1]. The conventional glasses-free autostereoscopic display exhibits the depth information of a 3D scene by rendering binocular parallax. Unlike viewing objects in the real world, this type of display cannot stimulate the natural response of eye accommodation and retinal blurring effects because it cannot correctly render 3D scene depth cues. With low-density viewpoints, only one viewpoint can be provided to a single eye, as shown in Fig. 1(a). Therefore, the depth perceived by monocular accommodation lies on the display screen, while the depth focused by binocular convergence lies on the 3D object in space. These mismatched depths lead to the well-known convergence-accommodation discrepancy problem, namely, the convergence-accommodation conflict (CAC) [2–9]. The two eyes constantly rebalance between the two distances, and visual fatigue and visual artifacts occur after a certain period of time, resulting in a deteriorating visual experience.

 

Fig. 1. (a) The conventional glasses-free stereoscopic display with low-density viewpoints. (b) The proposed light field display with high-density viewpoints.


Several display methods that are potentially capable of resolving the CAC have been demonstrated in recent years, e.g. holographic displays [10,11], multi-layer plane displays [12,13], multi-projection displays [14], volumetric displays [15,16], and light field displays [17,18]. Holographic displays can reconstruct the amplitude and phase information of a 3D object, which can perfectly solve the mismatch between convergence and accommodation. However, it is difficult to realize a real-time, full-color, large-size 3D display due to the limitations of recording materials and techniques. Multi-layer plane displays can provide multiple viewpoints with different depths of focus to satisfy the response of monocular accommodation; however, the limited parallax capacity results in a small viewing angle. Multi-projection displays can also solve the CAC by generating compact, dense viewpoints within the interval of the pupil size. However, the series of projectors that need to be calibrated inevitably reduces the compactness of the system, and the subsequent calibration operations are very cumbersome. Volumetric displays can generate 3D images in space by optical scanning; however, they cannot provide a fully convincing 3D experience due to their limited color reproduction. Furthermore, the 3D image size is usually small, making it impossible to present a large immersive 3D scene.

To alleviate the CAC, the accommodation response of the human eye should be provided, and the simplest and quickest way is to deliver multiple viewpoints of information to a single eye [5,19]. Many researchers have done extensive work on multi-view displays. Takaki et al. employed intensive projectors to generate 256 viewpoints with a viewpoint density of 0.76 mm−1 [14]. However, the number of calibrated projectors reduced the compactness of the system, and the 3D resolution of the prototype display was low. An improved super multi-view display based on a lower-resolution flat panel was presented [20], which could provide two or more viewpoints for each eye with a viewpoint density of 0.38 mm−1 and a clear depth of 80 mm. However, this method reduced the pitch of the viewing zones, which decreased the horizontal width of the observation area. Recently, a super multi-view near-eye display system combining a high-speed spatial light modulator with a 2D light source array was demonstrated [21], in which 21 viewpoints were generated with a viewpoint density of 0.5 mm−1 using a time-multiplexing technique. However, color 3D images were not implemented. H. Kakeya proposed a full-HD super multi-view display with a time-division multiplexing parallax barrier [22], which provided 18 viewpoints with a viewpoint density of 0.71 mm−1, and a head tracking system was used to increase the viewing angle. However, the freedom of the head-mounted device was limited in the near-eye display. Most importantly, the crosstalk between viewpoints was not taken into account. Our previous work demonstrated a crosstalk-suppressed dense multi-view light field display using a micro-pinhole unit array and a vertically-collimated backlight, which provided 3 viewpoints to each eye with a viewpoint density of 0.75 mm−1 [23]. However, it sacrificed over 90% of the brightness.

Here, a 15.6-inch horizontal-parallax-only light field display prototype based on a vertically-collimated programmable directional backlight [24,25] and a vertically-diffused holographic functional screen is demonstrated to address the CAC. Four viewpoints are arranged for each eye to provide precise accommodation cues. Meanwhile, the vertically-collimated programmable directional backlight (VC-PDB) cooperates with the refreshing liquid-crystal display (LCD) panel to provide convergence stimuli for human binocular vision. Thus, an accurate accommodation depth consistent with the convergence depth can be perceived. The reconstructed high-quality fatigue-free 3D images can be perceived with a clear focus depth of 13 cm. 352 viewpoints with a viewpoint density of 1 mm−1 are presented by using the eye tracker within a viewing angle of ±20°, and the VC-PDB cooperating with the light control module (LCM) achieves crosstalk of less than 6%, which reaches the typical level of the commercialized eyewear-assisted 3D display with a crosstalk of 9.6% [26]. In addition, super multi-view pick-up and a multiple-sub-unit combined coding method are used to break the tradeoff between the number of viewpoints and the resolution of the 3D image, and the eye tracker enlarges the viewing angle of the 3D images with smooth motion parallax.

2. Experimental configuration

2.1 Principle of the light field display system

The proposed super multi-view light field display system is composed of the VC-PDB, the LCD panel, the LCM, and the eye tracker, as shown in Fig. 2(a). The VC-PDB consists of an LED backlight array (LBA) and a linear Fresnel lens array (LFLA). The LCM consists of a narrow cylindrical lens array (NCLA) and a vertically-diffused holographic functional screen (VD-HFS). The light rays from the LED backlight array become approximately parallel after passing through the LFLA. The parallel rays pass through the LCD panel, where the synthetic image (SI) coded from multiple parallax images is displayed. Simultaneously, the content of the SI is changed in synchronization with the directional change of the light rays coming from the backlight. The proposed system can provide highly dense viewpoints within a certain range of distances in space, and the information of multiple viewpoints is simultaneously provided to a single eye to satisfy the response of monocular accommodation, as shown in Fig. 2(b). Meanwhile, the adjacent LED columns in each backlight group are alternately lit at different times in a time-sequential operation to provide binocular parallax. The VC-PDB cooperates with the refreshing LCD panel to provide a convergence depth matching the accommodation depth. Therefore, the conflict between accommodation and convergence can be eliminated. Each linear Fresnel lens and the corresponding M LED light columns it covers constitute a basic backlight unit, and a viewing area composed of M sub-viewing areas is formed on the viewing plane. The viewing angle $\Omega$ of the proposed light field display is determined by the sub-viewing area in which the VC-PDB can illuminate its corresponding SI, as given by Eq. (1).

$$\Omega = 2M\arctan (\frac{P}{{2D}})$$
where P is the width of the sub-viewing area illuminated by a set of LED lights, and D is the gap between the LCD panel and the viewing plane. In the proposed system, the LBA directly coupled with the LFLA constrains the angle of the emergent light to achieve precise light guiding, whereas the backlight unit of a conventional LCD panel cannot control the angular distribution of the emergent light. The NCLA is a refraction-based light-controlling optical component, which provides excellent light-controlling ability without loss of light energy. Assuming that the pupil diameter and interpupillary distance of the viewer are 4 mm and 60 mm respectively, simulations of the radiation energy patterns of 8 viewpoints produced by the NCLA with specifically designed parameters are carried out along the x-axis over a range of 8 mm, as shown in Fig. 2(c), which illustrates that 4 viewpoints can be seen within a range of 4 mm at the same time.
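As a sanity check, Eq. (1) can be evaluated numerically. The sketch below uses illustrative values for M, P, and D (they are chosen to reproduce a total angle near 40°, and are not the prototype parameters of Table 1):

```python
import math

def viewing_angle_deg(M, P_mm, D_mm):
    """Eq. (1): total viewing angle of the light field display.

    M    -- LED light columns covered by each linear Fresnel lens
    P_mm -- width of one sub-viewing area on the viewing plane (mm)
    D_mm -- gap between the LCD panel and the viewing plane (mm)
    """
    return 2 * M * math.degrees(math.atan(P_mm / (2 * D_mm)))

# Hypothetical geometry: 11 backlight groups, 31.75 mm sub-viewing areas,
# and a 500 mm viewing distance give roughly the +/- 20 degree angle reported.
angle = viewing_angle_deg(11, 31.75, 500)
```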

 

Fig. 2. The schematic diagram of the proposed super multi-view light field display system. (a) The system configurations of the display prototype. (b) The distribution of viewpoints for human eye. (c) The simulations of the radiation energy pattern of 8 viewpoint perspectives produced by the NCLA at viewing distance. (d)Diagram of the VD-HFS modulation light beams.


The VD-HFS is holographically printed with speckle patterns exposed on a special sensitive material, and the diffusion angle is determined by the shape and size of the speckles through control of the mask aperture [27,28]; it can be used to re-modulate the light field distribution from the light-controlling optical components as if the rays were emitted from a real object [13]. This method is also called the directional laser speckle. When the laser illuminates a diffusion plate of size a0 × b0, the speckle pattern is exposed on a photoresist plate behind the diffusion plate, recorded at a distance z0. Through ultraviolet curing and splicing, the repeated speckle patterns constitute the VD-HFS. When the speckle pattern is illuminated by a light wave, it diffuses and limits the light within the solid diffusion angles ωhorizontal = a0/z0 and ωvertical = b0/z0. By controlling the speckle pattern of the holographic functional screen, the exact diffusion angle can be achieved. As shown in Fig. 2(d), the vertical diffusion angle is much larger than the horizontal one, and the diffused light wave is scattered only along the direction perpendicular to the strip speckles. The eye tracker is driven by a novel pupil-tracking algorithm cooperating with a personal computer to capture the position of the human eye in real time with high accuracy. Therefore, the observer can experience a comfortable glasses-free autostereoscopic effect for a long time.
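The speckle-size relation ω = a0/z0 (and b0/z0) can be expressed directly. The diffusion-plate dimensions and recording distance below are placeholders, not the screen's actual recording geometry:

```python
def diffusion_angles_rad(a0_mm, b0_mm, z0_mm):
    """Small-angle diffusion of the VD-HFS speckle pattern:
    omega_horizontal = a0/z0, omega_vertical = b0/z0 (radians)."""
    return a0_mm / z0_mm, b0_mm / z0_mm

# A tall, narrow diffusion plate yields the anisotropy of Fig. 2(d):
# the vertical diffusion angle far exceeds the horizontal one.
omega_h, omega_v = diffusion_angles_rad(a0_mm=2.0, b0_mm=60.0, z0_mm=200.0)
```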

2.2 The light field pick-up and reconstruction

To perceive a naturally smooth 3D scene, the super multi-view information is collected and recorded. A virtual camera array (CA) with off-axis pick-up is utilized to digitally sample the relative direction and intensity of the target virtual 3D objects. The cameras of the CA are divided into M groups, and each group contains N cameras at different positions, as shown in Fig. 3(a). The larger the number of viewpoints per unit area, the greater the amount of spatial information contained; that is, more spatial information can enter the human eye to render correct natural depth information. The viewpoint density of the proposed super multi-view light field display is given by Eq. (2).

$${V_\rho } = \frac{{gN}}{{bD}}$$
where b is the pitch of the NCLA, D is the viewing distance from the LCD panel, g is the gap between the NCLA and the LCD panel, and N is the number of viewpoints in each group. With the proposed super multi-view light field pick-up, the direction information of the light ray bundle emitted from an arbitrary spatial point on the object toward the viewer's eye can be accurately collected and recorded in the corresponding images.
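Eq. (2) is a simple ratio and can be checked numerically. The values below are hypothetical and merely chosen to reproduce the 1 mm−1 density quoted in the paper:

```python
def viewpoint_density_per_mm(g_mm, N, b_mm, D_mm):
    """Eq. (2): viewpoints per millimetre on the viewing plane.

    g_mm -- gap between the NCLA and the LCD panel
    N    -- number of viewpoints in each camera group
    b_mm -- pitch of the NCLA
    D_mm -- viewing distance from the LCD panel
    """
    return (g_mm * N) / (b_mm * D_mm)

# Hypothetical example: a 16 mm gap, 32 viewpoints per group, a 1 mm lens
# pitch, and a 512 mm viewing distance give a density of 1 viewpoint per mm.
density = viewpoint_density_per_mm(g_mm=16, N=32, b_mm=1, D_mm=512)
```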

 

Fig. 3. The schematic diagram of the proposed light field pick-up and reconstruction. (a) The retina-based light field pick-up method. (b) The novel image mapping method based on backward ray-tracing for light field reconstruction.


The image coding method for dense viewpoints, based on the specific structural parameters and backward ray-tracing technology, renders the SI accurately and efficiently, ensuring natural depth cues and correct occlusion relationships. The origin of the coordinates is established at the leftmost end of the viewing area. The positions of the two eyes are detected in real time by the eye tracker to render and load the SI synthesized from a set of parallax images. The corresponding serial numbers of the SI and the backlight are determined by Eq. (3).

$$\left[ {\begin{array}{c} {{m_{right}}}\\ {{m_{left}}} \end{array}} \right] = \left[ {\begin{array}{c} {\left\lfloor {\dfrac{{{T_{right}}}}{{(N - 1) \cdot \Delta m}}} \right\rfloor }\\ {\left\lfloor {\dfrac{{{T_{left}}}}{{(N - 1) \cdot \Delta m}}} \right\rfloor } \end{array}} \right] + \left[ {\begin{array}{c} 1\\ 1 \end{array}} \right]$$
where Tright and Tleft are the positions of the right and left eye detected by eye tracker device in real-time, respectively. mright and mleft are the serial numbers of the SI presented into the right and left eyes, respectively. Δm represents the gap of adjacent cameras.
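For one eye, Eq. (3) reduces to a floor division plus one. A minimal sketch, with the camera gap Δm and group size N as assumed example values:

```python
import math

def si_serial_number(T_mm, N, delta_m_mm):
    """Eq. (3): serial number of the synthetic image steered to one eye.

    T_mm       -- tracked eye position from the left edge of the viewing area
    N          -- number of viewpoints per group
    delta_m_mm -- gap between adjacent cameras (as mapped to the viewing plane)
    """
    return math.floor(T_mm / ((N - 1) * delta_m_mm)) + 1

# With N = 32 and a 2 mm camera gap, each SI covers a 62 mm-wide zone:
# an eye at 0-61.99 mm sees SI #1, and SI #2 begins at 62 mm.
```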

The SI coding method with multiple sub-units can generate dense viewpoints. Unlike the traditional coding method, in which different pitches of the lenticular lens array distribute the covered sub-pixels into the same viewing area and the viewpoints formed by different lenses overlap exactly, here the covered sub-pixels are arranged from different relative positions of the corresponding sub-units, and multiple different sub-units are combined to form a display unit based on a specific structure, as shown in Fig. 3(b). Multiple viewpoint interpolation is performed within the original viewpoint area to realize the side-by-side arrangement of dense viewpoints. Multiplexing of multiple sub-units is achieved by simultaneously viewing different viewpoints from different sub-units, so the resolution of the viewed images is not reduced too much as the number of viewpoints is increased. Each sub-unit is composed of $a \times b$ sub-pixels ($a$ represents the number of rows and $b$ the number of columns), and c sub-units are combined to form a display unit with $a \times b \times c$ viewpoints. Since there is a relative position deviation between the sub-pixels in different sub-units, the directed viewpoints are formed at different locations in the horizontal direction. The pixel in row i and column j of the SI is described as Om(i, j), and the corresponding pixel in the parallax image picked up by the lth camera of the mth group in the CA is expressed as Pm,l(i, j) (where m = 1, 2,…, M and l = 1, 2,…, N). The mapping relationship can be expressed as Eq. (4).

$${O_m}(i,j) = {P_{m,l}}(i,j)$$
where
$$\left[ {\begin{array}{c} l\\ m \end{array}} \right] = \left[ {\begin{array}{cc} 1&{ - 1}\\ 0&0 \end{array}} \right]\left[ {\begin{array}{c} N\\ {\left\lfloor {\dfrac{{d - \left\lfloor {d/b} \right\rfloor }}{{b/N}}} \right\rfloor } \end{array}} \right] + \left[ {\begin{array}{cc} 0&0\\ 0&1 \end{array}} \right]\left[ {\begin{array}{c} 1\\ m \end{array}} \right]$$
with
$$\left\{ \begin{array}{l} d = 3j + 3i\tan \theta + k\\ N = a \times b \times c \end{array} \right.$$
where $\theta$ is the tilt angle of the NCLA and k indexes the RGB sub-pixel channels. Finally, the SI loaded on the LCD panel projects the viewpoints, separated from each other, onto the VD-HFS for viewpoint fusion.
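The mapping of Eqs. (4)–(6) can be sketched as a short function. This is a literal transcription of the formulas as printed, with the 12-viewpoint toy structure (a = 2, b = 3, c = 2) from Section 2.3 as default parameters rather than the prototype's actual configuration:

```python
import math

def camera_number(i, j, k, a=2, b=3, c=2, theta_deg=0.0):
    """Eqs. (4)-(6), transcribed literally: camera index l whose parallax
    image P_{m,l}(i, j) supplies sub-pixel (row i, column j, channel k)
    of the synthetic image O_m(i, j)."""
    N = a * b * c                                          # Eq. (6)
    d = 3 * j + 3 * i * math.tan(math.radians(theta_deg)) + k
    return N - math.floor((d - math.floor(d / b)) / (b / N))   # Eq. (5)
```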

2.3 Utilization of the LCM and the VC-PDB for crosstalk suppression

Although the NCLA provides excellent light control in the horizontal direction, crosstalk between the highly dense viewpoints still occurs. If a conventional stray backlight is used, the 3D effect deteriorates. The backlight in a conventional autostereoscopic display, especially an LED backlight, is generally stray light, and crosstalk between viewpoints is introduced in the vertical direction because the direction of light propagation cannot be controlled. To simplify the analysis of the crosstalk caused by the stray backlight, the operating principle of the SI is illustrated with 2 sub-units (where a = 2, b = 3, c = 2, 12 viewpoints). Light ray bundles generated by the stray backlight that improperly emerge from the lenses at different heights with incorrect propagation directions are marked in red in Fig. 4(a), while light ray bundles with correct propagation directions are marked in gray. As a result, sequences of viewpoints with different arrangements are generated and projected onto the VD-HFS. Both the correct viewpoints and the crosstalk viewpoints are widened to a larger height along the vertical direction by the modulation of the VD-HFS at the viewing window, which results in high crosstalk between the viewing zones of different viewpoints.

 

Fig. 4. (a) The schematic diagram of the vertical crosstalk between different viewpoints with the use of the conventional stray backlight and the LCM. (b) The schematic diagram of low crosstalk between different viewpoints with the use of the VC-PDB and the LCM.


Since the stray backlight leads to the erroneous propagation of light bundles, the VC-PDB is used to reduce the crosstalk, as illustrated in Fig. 1(a) and Fig. 4(b). Each linear Fresnel lens covers M LED light columns. The midline of each LED light column points toward the center of the corresponding linear Fresnel lens, and the distance between them is set to the focal length of the LFLA, which ensures that the light bundles passing through the LFLA are collimated into parallel beams. The VC-PDB ensures that the light bundles passing through the sub-pixels are addressed by the corresponding NCLA with the correct propagation directions. As a result, the crosstalk is greatly suppressed. According to the geometrical relationship, the thresholds of the vertical divergence angle θvertical and the horizontal divergence angle θhorizontal of the VC-PDB are given in Eq. (7).

$$\left\{ \begin{array}{l} {\theta _{vertical}} = 2\arctan \dfrac{{{W_p}}}{{2d}}\\ {\theta _{horizontal}} = 2\arctan \dfrac{{{W_{lf}}}}{{2d}} \end{array} \right.$$
where Wp is the height of sub-pixel of the LCD panel, and Wlf is the width of each linear Fresnel lens. d is the distance between the LBA and the LFLA, which is equal to the focal length of LFLA.
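Eq. (7) makes the asymmetry of the required collimation concrete. The sub-pixel height, lens width, and focal length below are hypothetical example values, not the prototype's specifications:

```python
import math

def divergence_thresholds_deg(Wp_mm, Wlf_mm, d_mm):
    """Eq. (7): vertical and horizontal divergence-angle thresholds of the
    VC-PDB. Wp is the LCD sub-pixel height, Wlf the width of each linear
    Fresnel lens, and d the LBA-LFLA gap (equal to the LFLA focal length)."""
    theta_v = 2 * math.degrees(math.atan(Wp_mm / (2 * d_mm)))
    theta_h = 2 * math.degrees(math.atan(Wlf_mm / (2 * d_mm)))
    return theta_v, theta_h

# Hypothetical geometry: a 0.06 mm sub-pixel and a 30 mm lens at a 50 mm
# focal length demand far tighter collimation vertically than horizontally.
theta_v, theta_h = divergence_thresholds_deg(0.06, 30.0, 50.0)
```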

The light ray bundles generated by the VC-PDB can ensure correct propagating directions, which are marked with the gray color in Fig. 4(b). Dense viewpoints are projected onto the VD-HFS by the precise refraction of the light rays from the NCLA. Since the position of each viewpoint in the unit is different, the crosstalk between viewpoints is greatly reduced in the vertical direction after the fusion and diffusion process of the holographic function screen. Compared with Fig. 4(a), it can be found that the overlap area between viewpoints is greatly reduced.

3. Experimental results and analysis

To verify the feasibility of the proposed method, relevant experiments and analyses are carried out. Here, a prototype with a clear focus depth of 13 cm is demonstrated. Eleven viewing zones can be generated by the eye tracker cooperating with the refreshing LCD panel, and each viewing zone contains 32 parallax images (where a = 2, b = 5.333, c = 3, 32 viewpoints). Therefore, the total number of viewpoints is 11 × 32 = 352. Finally, 352 viewpoints with a viewpoint density of 1 mm−1 are presented within a viewing angle of ±20°. The refresh rate of the LCD panel used in the experiment is 60 Hz; therefore, the refresh rate of the prototype is 30 Hz for each eye. The configuration of the display prototype is listed in Table 1.
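The viewpoint and refresh budget above follows directly from the quoted parameters:

```python
# Arithmetic behind the prototype's viewpoint and refresh budget (Section 3).
a, b, c = 2, 5.333, 3                   # sub-unit structure quoted in the text
viewpoints_per_zone = round(a * b * c)  # 31.998 -> 32 parallax images per zone
viewing_zones = 11                      # steered by the eye tracker
total_viewpoints = viewing_zones * viewpoints_per_zone
per_eye_refresh_hz = 60 / 2             # 60 Hz LCD time-multiplexed to two eyes
```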


Table 1. The specific parameters of the optical system

Compared with a light field display implemented with the conventional stray backlight, the inter-viewpoint aliasing of the demonstrated light field display based on the VC-PDB is reduced. In fact, the displayed 3D image acquired by the human eyes is composed of a series of sub-pixels observed through each sub-unit; therefore, the simulation results are obtained by calculating the displayed images for the human eyes. To evaluate the image quality of the reconstructed 3D scene, this simulation method is used to obtain different viewpoint perspectives for the two kinds of light field system, and the structural similarity index measure (SSIM) of the viewpoint perspectives at different angles is calculated. As shown in Fig. 5, the SSIM value of the light field display with the VC-PDB is obviously higher than that with the conventional stray backlight, which illustrates that the display with the VC-PDB has higher similarity to the original perspectives. Figure 6 shows the displayed 3D images of the light field display with the VC-PDB and the light field display with the conventional stray backlight. The experimental results illustrate that the proposed light field display clearly improves the 3D imaging quality.

 

Fig. 5. Simulation of SSIM for different perspectives of the 3D scene. (a) Perspectives captured from different angles. (b) Simulation results of the light field display with the conventional stray backlight. (c) SSIM of the light field display with the conventional stray backlight. (d) Simulation results of the light field display with the VC-PDB. (e) SSIM of the light field display with the VC-PDB.


 

Fig. 6. Comparison of the displayed 3D effects produced by two backlight methods with 32 viewpoints. (a) The 3D effect produced by the light field display with the conventional stray backlight and LCM. (b) The 3D effect produced by the proposed light field display.


The formation of the viewing area is evaluated by measuring the luminance distribution in the viewing area. The luminance distributions of eight viewpoints in the specific viewing area are measured by a CCD camera with a focal length of 35 mm placed at the viewing distance, as shown in Fig. 7(a). From the results depicted in Fig. 7(b), crosstalk values of 5.12%, 3.54%, 5.63%, 4.42%, 4.93%, 2.75%, 4.56%, and 5.05% are measured at the center of the viewing plane. These results show that the crosstalk of the developed prototype is below the level of the commercialized eyewear-assisted 3D display.
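A quick summary of the quoted measurements confirms the claim:

```python
# Crosstalk values measured at the center of the viewing plane (percent),
# as quoted in the text for the eight viewpoints.
crosstalk = [5.12, 3.54, 5.63, 4.42, 4.93, 2.75, 4.56, 5.05]
max_crosstalk = max(crosstalk)                    # worst viewpoint: 5.63%
mean_crosstalk = sum(crosstalk) / len(crosstalk)  # average across viewpoints
# Every viewpoint stays below the 6% bound and well under the 9.6% level of
# the commercialized eyewear-assisted 3D display cited in Section 1.
```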

 

Fig. 7. Illustration of the proposed system for crosstalk measurements. (a) Measuring crosstalk configuration and diagram. (b) The luminance and crosstalk distributions for 8 viewpoints at the viewing plane.


One of the prominent applications of the light field display is the exhibition and appreciation of cultural relics, as shown in Fig. 8. The parallax images of glaze horses are coded and loaded on the LCD panel. A CCD camera, which always faces the center of the screen, is used to capture the 3D images at different angles and different depth planes. As shown in Fig. 8(b), when the focal plane is on one of the glaze horses, that horse is imaged clearly while the others are blurred. The glaze horses are captured at multiple angles in Fig. 8(c) and Visualization 1 when the camera focuses on the LCD panel. The experimental results clearly demonstrate that the proposed light field display prototype can provide accommodation stimuli with precise depth cues and smooth motion parallax with correct geometric occlusion.

 

Fig. 8. Experimental results for glaze horses based on the proposed light field display prototype. (a) The arrangement of the experimental target scene. (b) The experimental results are captured at the angle of 0° when the camera focuses on the front, middle, and rear position, respectively. (c)The glaze horses are captured with multiple angles when the camera focuses on the LCD panel (see Visualization 1).


4. Conclusion

The conventional glasses-free stereoscopic display causes the CAC because it cannot provide monocular accommodation stimuli. Here, a low-crosstalk super multi-view light field display with natural depth cues and smooth motion parallax is demonstrated. Highly dense viewpoints generated with the novel method of light field pick-up and reconstruction provide the monocular accommodation stimuli, and the VC-PDB cooperates with the refreshing LCD panel to provide binocular parallax. In addition, the eye tracker tracks the position of the human eye to provide smooth motion parallax and increase the viewing angle of the reconstructed 3D scene. The proposed light field display prototype provides clear depth cues and smooth motion parallax in comparison with the conventional glasses-free autostereoscopic display. In our experiments, reconstructed high-quality fatigue-free 3D light field images with a clear displayed depth of 13 cm can be perceived with correct geometric occlusion and smooth parallax within a viewing angle of ±20°, where 352 viewpoints with a viewpoint density of 1 mm−1 and crosstalk of less than 6% are presented. We believe that this super multi-view, low-crosstalk light field display prototype can promote applications in other fields, such as the military, medical education, biomedicine, and commercial exhibition.

Funding

National Key Research and Development Program (2017YFB1002900); National Science Foundation of China (NSFC) (61575025); Fund of the State Key Laboratory of Information Photonics and Optical Communications; Fundamental Research Funds for the Central Universities (2019PTB-018); Fundamental Research Funds for the Central Universities (2019RC13).

Disclosures

The authors declare no conflicts of interest. This work is original and has not been published elsewhere.

References

1. J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photonics 5(4), 456–535 (2013). [CrossRef]  

2. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3), 33 (2008). [CrossRef]  

3. A. Stern, Y. Yitzhaky, and B. Javidi, “Perceivable light fields: matching the requirements between the human visual system and autostereoscopic 3-D displays,” Proc. IEEE 102(10), 1571–1587 (2014). [CrossRef]  

4. R. Ohara, M. Kurita, T. Yoneyama, F. Okuyama, and Y. Sakamoto, “Response of accommodation and vergence to electro-holographic images,” Appl. Opt. 54(4), 615–621 (2015). [CrossRef]  

5. Y. Takaki, “High-density directional display for generating natural three-dimensional images,” Proc. IEEE 94(3), 654–663 (2006). [CrossRef]  

6. H. Huang and H. Hua, “effect of ray position sampling on the visual responses of 3D light field displays,” Opt. Express 27(7), 9343–9360 (2019). [CrossRef]  

7. P. Y. Chou, J. Y. Wu, S. H. Huang, C. P. Wang, Z. Qin, C.-T. Huang, P. Y. Hsieh, H. H. Lee, T. H. Lin, and Y. P. Huang, “Hybrid light field head-mounted display using time-multiplexed liquid crystal lens array for resolution enhancement,” Opt. Express 27(2), 1164–1177 (2019). [CrossRef]  

8. H. Huang and H. Hua, “Generalized methods and strategies for modeling and optimizing the optics of 3D head-mounted light field displays,” Opt. Express 27(18), 25154–25171 (2019). [CrossRef]  

9. Z. Qin, P. Y. Chou, J. Y. Wu, Y. T. Chen, C. T. Huang, N. Balram, and Y. P. Huang, “Image formation modeling and analysis of near-eye light field displays,” J. Soc. Inf. Disp. 27(4), 238–250 (2019). [CrossRef]  

10. J. Y. Son, C. H. Lee, O. O. Chernyshov, B. R. Lee, and S.-K. Kim, “A floating type holographic display,” Opt. Express 21(17), 20441–20451 (2013). [CrossRef]  

11. S. F. Lin and E. S. Kim, “Single SLM full-color holographic 3-D display based on sampling and selective frequency-filtering methods,” Opt. Express 25(10), 11389–11404 (2017). [CrossRef]  

12. S. Liu and H. Hua, “A systematic method for designing depth-fused multifocal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010). [CrossRef]  

13. G. Wetzstein, D. Lanman, M. Hirsch, W. Heidrich, and R. Raskar, “Compressive light field displays,” IEEE Comput. Grap. Appl. 32(5), 6–11 (2012). [CrossRef]  

14. T. Yasuhiro and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18(9), 8824–8835 (2010). [CrossRef]  

15. D. E. Smalley, E. Nygaard, K. Squire, J. Van Wagoner, J. Rasmussen, S. Gneiting, K. Qaderi, J. Goodsell, W. Rogers, M. Lindsey, K. Costner, A. Monk, M. Pearson, B. Haymore, and J. Peatross, “A photophoretic-trap volumetric display,” Nature 553(7689), 486–490 (2018). [CrossRef]  

16. D. Smalley, T. C. Poon, H. Gao, J. Kvavle, and K. Qaderi, “Volumetric Displays: Turning 3-D Inside-Out,” Opt. Photonics News 29(6), 26–33 (2018). [CrossRef]  

17. X. Sang, X. Gao, X. Yu, S. Xing, Y. Li, and Y. Wu, “Interactive floating full-parallax digital three dimensional light-field display based on wavefront recomposing,” Opt. Express 26(7), 8883–8889 (2018). [CrossRef]  

18. X. Liu and H. Li, “The progress of light field 3-D displays,” Inf. Disp. 30(6), 6–14 (2014). [CrossRef]  

19. H. Huang and H. Hua, “Systematic characterization and optimization of 3D light field displays,” Opt. Express 25(16), 18508–18525 (2017). [CrossRef]  

20. T. Yasuhiro, T. Kosuke, and N. Junya, “Super multi-view display with a lower resolution flat-panel display,” Opt. Express 19(5), 4129–4139 (2011). [CrossRef]  

21. T. Ueno and T. Yasuhiro, “Super multi-view near-eye display to solve vergence-accommodation conflict,” Opt. Express 26(23), 30703–30715 (2018). [CrossRef]  

22. H. Kakeya, “A Full-HD Super-Multiview Display with Time-Division Multiplexing Parallax Barrier,” Dig. Tech. Pap. - Soc. Inf. Disp. Int. Symp. 49(1), 259–262 (2018). [CrossRef]  

23. L. Yang, X. Sang, X. Yu, B. Liu, B. Yan, K. Wang, and C. Yu, “A crosstalk-suppressed dense multi-view light-field display based on real-time light-field pickup and reconstruction,” Opt. Express 26(26), 34412–34427 (2018). [CrossRef]  

24. D. Fattal, Z. Peng, T. Tran, S. Vo, M. Fiorentino, J. Brug, and G. Beausoleil, “A multi-directional backlight for a wide-angle, glasses-free three-dimensional display,” Nature 495(7441), 348–351 (2013). [CrossRef]  

25. J. He, Q. Zhang, J. Wang, J. Zhou, and H. Liang, “Investigation on quantitative uniformity evaluation for directional backlight autostereoscopic displays,” Opt. Express 26(8), 9398–9408 (2018). [CrossRef]  

26. Y. C. Chang, C. Y. Ma, and Y. P. Huang, “Crosstalk suppression by image processing in 3D display,” Dig. Tech. Pap. - Soc. Inf. Disp. Int. Symp. 41(1), 124–127 (2010). [CrossRef]  

27. C. Yu, J. Yuan, F. C. Fan, C. C. Jiang, S. Choi, X. Sang, C. Lin, and D. Xu, “The modulation function and realizing method of holographic functional screen,” Opt. Express 18(26), 27820–27826 (2010). [CrossRef]  

28. X. Sang, F. C. Fan, S. Choi, C. C. Jiang, C. Yu, B. Yan, and W. Dou, “Three-dimensional display based on the holographic functional screen,” Opt. Eng. 50(9), 091311 (2011). [CrossRef]  


Supplementary Material (1)

» Visualization 1: This video shows the 3D display result of the glaze horses within the viewing angle of ±20°. The motion parallax is continuous and smooth.



Figures (8)

Fig. 1.
Fig. 1. (a) The conventional glasses-free stereoscopic display with low-density viewpoints. (b) The proposed light field display with high-density viewpoints.
Fig. 2.
Fig. 2. The schematic diagram of the proposed super multi-view light field display system. (a) The system configuration of the display prototype. (b) The distribution of viewpoints for the human eye. (c) Simulations of the radiation energy pattern of 8 viewpoint perspectives produced by the NCLA at the viewing distance. (d) Diagram of the VD-HFS modulating the light beams.
Fig. 3.
Fig. 3. The schematic diagram of the proposed light field pick-up and reconstruction. (a) The retina-based light field pick-up method. (b) The novel image mapping method based on backward ray-tracing for light field reconstruction.
Fig. 4.
Fig. 4. (a) The schematic diagram of the vertical crosstalk between different viewpoints with the use of the conventional stray backlight and the LCM. (b) The schematic diagram of low crosstalk between different viewpoints with the use of the VC-PDB and the LCM.
Fig. 5.
Fig. 5. Simulation of SSIM for different perspectives of the 3D scene. (a) Perspectives captured from different angles. (b) Simulation results of the light field display with the conventional stray backlight. (c) SSIM of the light field display with the conventional stray backlight. (d) Simulation results of the light field display with the VC-PDB. (e) SSIM of the light field display with the VC-PDB.
Fig. 6.
Fig. 6. Comparison of the displayed 3D effects produced by two backlight methods with 32 viewpoints. (a) The 3D effect produced by the light field display with the conventional stray backlight and LCM. (b) The 3D effect produced by the proposed light field display.
Fig. 7.
Fig. 7. Illustration of the proposed system for crosstalk measurements. (a) The configuration and diagram of the crosstalk measurement. (b) The luminance and crosstalk distributions for 8 viewpoints at the viewing plane.
Fig. 8.
Fig. 8. Experimental results for the glaze horses based on the proposed light field display prototype. (a) The arrangement of the experimental target scene. (b) The experimental results captured at the angle of 0° when the camera focuses on the front, middle, and rear positions, respectively. (c) The glaze horses captured from multiple angles when the camera focuses on the LCD panel (see Visualization 1).

Tables (1)

Table 1. The specific parameters of the optical system

Equations (7)


$$\Omega = 2M \arctan\!\left(\frac{P}{2D}\right)$$
$$V_\rho = \frac{gN}{bD}$$
$$\begin{bmatrix} m_{right} \\ m_{left} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \operatorname{floor}\!\left[\dfrac{T_{right}(N-1)}{\Delta m}\right] \\ \operatorname{floor}\!\left[\dfrac{T_{left}(N-1)}{\Delta m}\right] \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
$$O_m(i,j) = P_{m,l}(i,j)$$
$$\begin{bmatrix} l \\ m \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} N \operatorname{floor}\!\left(\dfrac{d}{bN}\right) \\ \operatorname{floor}\!\left(\dfrac{d}{b}\right) \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ m \end{bmatrix}$$
$$\begin{cases} d = 3\times j + 3\times i \times \tan\theta + k \\ N = a \times b \times c \end{cases}$$
$$\begin{cases} \theta_{vertical} = 2\arctan\dfrac{W_p}{2d} \\ \theta_{horizontal} = 2\arctan\dfrac{W_{lf}}{2d} \end{cases}$$
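As a quick numerical check of the final pair of viewing-angle relations, they can be evaluated directly. The sketch below is illustrative only: the values chosen for W_p, W_lf, and d are assumptions for demonstration, not the prototype's parameters from Table 1.

```python
import math

def viewing_angles(W_p, W_lf, d):
    """Vertical and horizontal viewing angles (in radians) for a pixel
    width W_p, a light-field screen width W_lf, and a viewing distance d,
    following theta = 2*arctan(W / (2*d)). All lengths share one unit."""
    theta_vertical = 2.0 * math.atan(W_p / (2.0 * d))
    theta_horizontal = 2.0 * math.atan(W_lf / (2.0 * d))
    return theta_vertical, theta_horizontal

# Assumed example numbers (millimeters): a 600 mm wide screen viewed
# from 824 mm gives a half-angle of about 20 deg, i.e. a full angle
# near 40 deg, the same order as the reported +/- 20 deg viewing angle.
_, th = viewing_angles(W_p=0.2, W_lf=600.0, d=824.0)
print(round(math.degrees(th), 1))  # -> 40.0
```

Substituting the prototype's actual screen width and viewing distance from Table 1 should reproduce the reported ±20° figure.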
