
Real-mode depth-fused display with viewer tracking

Open Access

Abstract

A real-mode depth-fused display is proposed by employing an integral imaging method in a depth-fused display system with viewer tracking. By producing a depth-fusing effect between a transparent display and a planar two-dimensional image floated by the real-mode integral imaging method, a three-dimensional image is generated in front of the display plane, unlike in conventional depth-fused displays. The viewing angle of the system is expanded with a viewer tracking method. In addition, dynamic vertical and horizontal motion parallax is provided according to the tracked position of the viewer. Since the depth-fusing effect does not depend on the viewing distance, the accommodation cue and motion parallax are provided over a wide range of viewing positions. We demonstrate the feasibility of the proposed method with an experimental system.

© 2015 Optical Society of America

1. Introduction

Three-dimensional (3D) displays have long been considered the next generation of displays. Although autostereoscopic 3D displays have been studied for more than a hundred years, realizing a 3D image as natural as a real-world object remains difficult [1]. The reason is that the required amount of information is too large for current technologies. Moreover, trade-offs among the viewing parameters also hinder the realization of natural, high-quality 3D images [2, 3].

It is known that four physiological factors of depth perception are essential for a natural 3D experience: binocular disparity, convergence, accommodation, and motion parallax [1, 4]. Binocular disparity and convergence are easily implemented in conventional 3D display systems because they require only two different view images, one for each eye. The other factors, however, are hard to satisfy because the required amount of information multiplies with the number of views. In ray-optics or directional-view approaches, at least two different view images should be incident on one eye in order to induce an accommodation response; this is called the super multi-view condition [5]. Realizing the super multi-view condition requires a very dense bundle of rays, which previous studies usually achieved by employing multiple display devices or high-resolution displays [6–8].

Recently, pioneering work has shown that compressive optimization of the view images can greatly reduce system complexity [9, 10]. The viewing angle of such a system can also be increased by combining it with a tracking system [11]. However, compressive display methods require high-speed displays for temporal multiplexing. They are a promising technology, but further development of display technologies such as transparency and driving speed is required to realize commercial quality. Moreover, view-image-based accommodation is highly dependent on the observer's viewing distance because the density of the view images decreases as the observer moves away from the display.

Depth-fused displays (DFDs), however, provide an accommodation cue with a different approach: they exploit physiological characteristics of the human eye. When the viewer observes depth-fused 3D images, the focusing position is adjusted to give the maximum contrast of the retinal image, which is the combination of the overlapped plane images [12–14]. It is also reported that the luminance difference at the edges of the superimposed images produces binocular disparity, which results in the perception of a depth change according to the luminance ratio [15, 16].

Consequently, the accommodation property of a DFD system is maintained regardless of the viewer's distance from the display, as the luminance ratio of the front and rear images is the only factor that affects accommodation, and it is not altered by the viewing distance. By exploiting this property of the DFD together with tracking technology, the viewing condition can be greatly improved.

In this paper, a further developed DFD system is presented that employs a viewer tracking method in addition to combining the DFD with an integral imaging method [17]. Combining viewer tracking with integral imaging, or a DFD with an integral imaging method, expands the depth range and viewing angle, but the resolution of the image may be degraded [17–19]. Similar to the previous method, combining the DFD with an integral imaging system can increase the depth range and the viewing angle at the same time. In this configuration, the integral imaging system shows only a plane image, which is simply floated from the display plane to a new position. It has also been reported that using a multi-layered display as the elemental image source can improve the viewing characteristics of the reconstructed 3D images [20–22]; however, that approach requires a bulky system because of the large collimating lens. Compared to floating displays, which optically translate the image using one large floating lens, the integral-imaging-combined DFD can be implemented compactly without increasing the thickness of the system [17]. With a real-mode integral imaging system, the floated image is located in front of the lens array, and it produces a depth-fusing effect together with the rear image displayed on the transparent display screen located on the lens array.

As a result, a 3D image providing accommodation can be generated in mid-air in front of the display, which is very hard to achieve with conventional DFDs or integral imaging displays. In addition, the depth-fusing effect provides accommodation regardless of the observer's viewing distance, and tracking enables dynamic parallax and a wider viewing angle. To show the feasibility of the proposed method, we conducted an experiment demonstrating the improved viewing conditions compared to conventional integral imaging and DFD systems.

2. Viewer tracking method

The proposed system is composed of two parts, as shown in Fig. 1: a viewer-tracked integral imaging system and a viewer-tracked DFD. The viewer-tracked integral imaging system is based on previous studies, but it differs in that the proposed system reconstructs only 2D planar images instead of 3D images, as the 3D effect is given by the depth-fusing effect. In addition, the viewing angle of the integral imaging part is kept as small as possible, just covering both eyes of the observer, because the viewing angle can be extended with the tracking method; this minimizes the resolution loss. Motion parallax also does not need to be provided by the integral imaging system because dynamic parallax is provided by the tracking system. Consequently, the spatial resolution of the reconstructed image is degraded less than in a conventional integral imaging system providing a similar depth range and viewing angle.

Fig. 1 Configuration of the proposed system.

2.1 Viewer tracking depth-fused display

The viewing angle of a DFD can be defined as the maximum angular span within which the viewer observes a correctly overlapped image. In a conventional DFD system, the viewing angle is extremely limited because even a slight misalignment of the overlapped images breaks the depth-fusing condition; as a result, the DFD generates a correct 3D image only in a fixed viewing direction. Previous research addressed the narrow viewing angle by adopting a multi-view method that provides shifted view images according to the shifted position of the viewer [17]. However, providing extra viewing positions sacrifices the spatial resolution of the image or requires additional display devices for the other viewpoints. Dynamic shifting of the image can solve this problem, and it can be implemented simply by adopting a viewer tracking system [18]. The viewing direction of the observer is obtained from the tracking sensor, and the image is shifted to compensate for the viewer's deviation angle from the center of the display, as illustrated in Fig. 2.

Fig. 2 Concept of the compensation for the base images along with the viewer's off-axis position.
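The compensation can be illustrated with a small geometric sketch. Assuming the two base image planes are separated by a distance d and the tracked viewer sits at horizontal and vertical deviation angles from the display normal, keeping the images overlapped along the line of sight requires a lateral shift of roughly d·tan(θ). The function and parameter names below are illustrative, not taken from the paper.

```python
import math

def base_image_shift_px(plane_separation_mm, theta_h_deg, theta_v_deg, pixel_pitch_mm):
    """Lateral shift (in pixels) applied to the rear base image so that it stays
    overlapped with the front base image along the tracked viewing direction
    (simple geometric sketch; illustrative only)."""
    dx_mm = plane_separation_mm * math.tan(math.radians(theta_h_deg))
    dy_mm = plane_separation_mm * math.tan(math.radians(theta_v_deg))
    return dx_mm / pixel_pitch_mm, dy_mm / pixel_pitch_mm

# Example: 50 mm plane separation, viewer 10 degrees off-axis horizontally,
# 0.1 mm pixel pitch -> the rear image is shifted by roughly 88 pixels.
print(base_image_shift_px(50.0, 10.0, 0.0, 0.1))
```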

In addition to expanding the viewing angle, the tracking method gives the DFD system another benefit: motion parallax. While a previous study proposed a DFD system with multiple viewing zones, that system could not give the parallax of the object; even when the viewer changes position, the parallax of the observed image does not change, as in glasses-type stereoscopic 3D images. By updating the source image with the corresponding parallax images, however, motion parallax can be provided through the system. This kind of tracking method is commonly used in horizontal-parallax-only systems to provide vertical parallax [23, 24]. Since a stacked DFD configuration does not have any parallax of its own, both horizontal and vertical parallax should be given with the tracking method. To provide parallax in a DFD system, view images having angular parallax are converted into base images for the DFD according to the viewer's position.

2.2 Viewer tracking integral imaging for a depth-fused display

It has been reported that tracking is an effective method for achieving a wider viewing angle in an integral imaging system [19, 25]. In our proposed method, a real-mode integral imaging system is used to display the front base image of a depth-fused 3D image. As mentioned earlier, the reconstructed image is a plane image whose viewing angle covers both of the viewer's eyes. Because the viewing angle is extended by the tracking method, it does not need to be wider than this condition, which keeps the spatial resolution of the image as high as possible.

The viewing angle of integral imaging indicates the maximum angular deviation allowed for the viewer's position. It can be calculated from the geometry of the lens and the elemental image, as shown in Fig. 3 and in Eqs. (1) and (2) [2]. Here Ωc is the viewing direction, Ωs is the span of the viewing angle, g is the gap between the lens array and the elemental images, s is the shift of the elemental images from their original position, and pl is the size of the elemental image and elemental lens. As the equations show, a larger elemental image area results in a wider viewing angle. Alternatively, by tilting the optical axis with shifted elemental images, the center of the viewing angle can be steered according to the tracked viewer's position; such images are called adapted elemental images [19, 25]. However, the angular span of the viewing angle decreases as the tilt angle becomes steeper, which can also be calculated from the geometry of the elemental image and elemental lens, as shown in Eqs. (1) and (2), with the shifting distance s.

Fig. 3 Geometry of the viewing angle for the shifted elemental image.

$$\Omega_s = \arctan\!\left(\frac{s + p_l/2}{g}\right) - \arctan\!\left(\frac{s - p_l/2}{g}\right), \tag{1}$$
$$\Omega_c = \frac{1}{2}\left[\arctan\!\left(\frac{s + p_l/2}{g}\right) + \arctan\!\left(\frac{s - p_l/2}{g}\right)\right]. \tag{2}$$
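As a numerical check, the sketch below evaluates Eqs. (1) and (2) for a shifted elemental image; the lens pitch of 10 mm and gap of 39.3 mm correspond to the experimental parameters given in Section 3, and the function names are ours.

```python
import math

def viewing_zone(s_mm, p_l_mm, g_mm):
    """Span (Eq. (1)) and center direction (Eq. (2)) of the viewing angle,
    in degrees, for an elemental image shifted by s from its original position."""
    a = math.atan((s_mm + p_l_mm / 2.0) / g_mm)
    b = math.atan((s_mm - p_l_mm / 2.0) / g_mm)
    omega_s = math.degrees(a - b)           # angular span of the viewing zone
    omega_c = math.degrees(0.5 * (a + b))   # steered viewing direction
    return omega_s, omega_c

# Without shift the span is about 14.5 degrees; shifting the elemental image
# steers the zone off-axis while the span shrinks.
for s in (0.0, 5.0, 10.0):
    print(s, viewing_zone(s, p_l_mm=10.0, g_mm=39.3))
```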

Figure 4 shows the viewing angle of an example system with a 10 mm lens array having a focal length of 22 mm and a central depth plane (CDP) at 50 mm, where the floated images are located according to the lens equation. As the graph in Fig. 4 shows, the span of the viewing angle decreases as the deviation of the observer's viewing direction becomes steeper. The decreased viewing angle should still cover both eyes in order to prevent cracking or flipping of the reconstructed images. The coverage viewing angle Ωcv can be calculated as follows.

$$\Omega_{cv} = 2\tan^{-1}\!\left(\frac{I_{PD}}{2D}\right), \tag{3}$$
where D is the viewing distance of the observer, and IPD is the interpupillary distance, typically about 65 mm for adults. The viewing distance in an integral imaging system with n elemental lenses is usually longer than the minimum viewing distance Dm, given as g(n-1), and the coverage angle is maximized at the minimum viewing distance. As a result, if the span of the viewing angle Ωs is larger than the maximum coverage viewing angle Ωcv, a correct plane image is observed by the viewer, as illustrated in Fig. 5.
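A minimal sketch of Eq. (3) and the resulting binocular condition, assuming an interpupillary distance of 65 mm; the helper names are ours.

```python
import math

def coverage_angle_deg(viewing_distance_mm, ipd_mm=65.0):
    """Coverage viewing angle of Eq. (3): the angle the two eyes subtend
    at the display for a viewer at the given distance."""
    return math.degrees(2.0 * math.atan(ipd_mm / (2.0 * viewing_distance_mm)))

def covers_both_eyes(omega_s_deg, viewing_distance_mm, ipd_mm=65.0):
    """The plane image is seen correctly when the span of the viewing zone
    is at least as large as the coverage angle."""
    return omega_s_deg >= coverage_angle_deg(viewing_distance_mm, ipd_mm)

# At a 1000 mm viewing distance the coverage angle is about 3.7 degrees,
# easily covered by the roughly 15 degree on-axis span computed above.
print(coverage_angle_deg(1000.0), covers_both_eyes(15.0, 1000.0))
```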

Fig. 4 Span of viewing angle according to the viewing direction.

Fig. 5 Coverage angle for binocular condition.

Figure 6 shows the criteria that determine the viewing angle of the proposed system. As the graph shows, the smaller of the viewing angle determined by the coverage angle and the tracking limit of the sensor decides the viewing angle of the system. The example system in the graph is composed of a 1 mm lens array with a focal length of 3.3 mm, and the CDP of the system is 30 mm. Provided that Dm is 500 mm, the maximum coverage angle is approximately 8 degrees, which results in a maximum viewing angle of about 45 degrees on each side, i.e., a total viewing angle of 90 degrees. However, if the tracking limit of the sensor is 35 degrees, the effective viewing angle narrows to 70 degrees. As these examples show, the viewing angle changes with the system parameters, so a proper viewing angle should be chosen for each specific application in order to keep the spatial resolution as high as possible.
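The criterion of Fig. 6 can be reproduced numerically with the stated example parameters (1 mm lens pitch, 3.3 mm focal length, 30 mm CDP, Dm = 500 mm, 35 degree tracking limit). The gap is obtained from the lens equation; the search over the shift s and all function names are our own illustration.

```python
import math

def gap_from_lens_equation(f_mm, cdp_mm):
    """Gap between lens array and elemental images that places the CDP in real mode."""
    return 1.0 / (1.0 / f_mm - 1.0 / cdp_mm)

def span_deg(s, p_l, g):
    return math.degrees(math.atan((s + p_l / 2) / g) - math.atan((s - p_l / 2) / g))

def center_deg(s, p_l, g):
    return math.degrees(0.5 * (math.atan((s + p_l / 2) / g) + math.atan((s - p_l / 2) / g)))

p_l, f, cdp, d_m, track_limit = 1.0, 3.3, 30.0, 500.0, 35.0
g = gap_from_lens_equation(f, cdp)                         # about 3.7 mm
coverage = math.degrees(2 * math.atan(65.0 / (2 * d_m)))   # about 7.4 degrees

# Steer the viewing zone outward until its span no longer covers both eyes.
s = 0.0
while span_deg(s, p_l, g) >= coverage:
    s += 0.01
one_side_limit = center_deg(s, p_l, g)                     # about 45 degrees

# The smaller of the binocular limit and the tracking limit decides the result.
print(2 * min(one_side_limit, track_limit))                # about 70 degrees total
```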

Fig. 6 Criteria for viewing angle.

3. Experiment

To show the feasibility of the proposed method, an experimental system was implemented. It uses two liquid crystal displays (LCDs): one for the integral imaging system and the other for the depth-fusing effect. The LCD for the depth-fusing effect is modified by removing the backlight unit and the rear polarization film so that it becomes transparent. A lens array is located between the two LCDs, as shown in the conceptual diagram in Fig. 1.

For the tracking sensor, an off-the-shelf webcam (C920, Logitech) is used, whose field of view is approximately 70 degrees horizontally and 40 degrees vertically. Blender is used to generate parallax images with equiangular spacing [26]. Sixty view images in the horizontal direction and 15 view images in the vertical direction are captured, resulting in a total of 900 view images within the field of view of the tracking sensor. The image preparation procedure is illustrated in Fig. 7(a) with some sample view images. For each view image, the corresponding depth information is saved for preparing the depth-fused images. According to the linear depth blending function in Eq. (4), each view image is separated into front and rear base images, as shown in Fig. 7(b) [12–15].

$$\begin{aligned} I_f(x,y) &= 1 - z(x,y)\,\{1 - I_0(x,y)\}, & S_f(x,y) &= S_0(x,y)\, z(x,y),\\ I_r(x,y) &= 1 - \{1 - z(x,y)\}\,\{1 - I_0(x,y)\}, & S_r(x,y) &= S_0(x,y)\,\{1 - z(x,y)\}, \end{aligned} \tag{4}$$
where If and Ir represent the intensity values of the front and rear images, and Sf and Sr indicate the saturation of the images. I0, S0, and z are the normalized intensity, saturation, and depth value of the original 3D image at pixel (x, y). Figure 8 graphically shows how the value and saturation of a pixel are divided. Because transmissive displays (LCDs) are used, the value (1-I0) is divided according to the depth position instead of I0, unlike the case of emissive displays. This results in linear blending realized by controlling the transmittance of each panel.
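A minimal sketch of the blending in Eq. (4), assuming value, saturation, and depth are normalized to [0, 1] and that z = 1 places a pixel fully on the front (floated) plane, as implied by the equation; the function and variable names are ours.

```python
def depth_fuse_lcd(value, saturation, depth):
    """Linear depth blending of Eq. (4) for two stacked transmissive (LCD) panels.
    The transparency (1 - value) is split between the panels according to depth;
    the same arithmetic applies element-wise to whole image arrays."""
    v_front = 1.0 - depth * (1.0 - value)            # I_f
    v_rear = 1.0 - (1.0 - depth) * (1.0 - value)     # I_r
    s_front = saturation * depth                     # S_f
    s_rear = saturation * (1.0 - depth)              # S_r
    return (v_front, s_front), (v_rear, s_rear)

# A mid-gray, fully saturated pixel placed halfway between the planes:
# each panel transmits 0.75 and carries half of the saturation.
print(depth_fuse_lcd(0.5, 1.0, 0.5))
```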

Fig. 7 Preparation of source image: (a) procedure of image processing for the proposed system, (b) sample images for two different viewing directions.

Fig. 8 Saturation and value of the front and rear base images for a specific pixel with S0 and I0. The relationships Sf:Sr = z:(1-z) and (1-If):(1-Ir) = z:(1-z) are satisfied for the front and rear pixel values.

We should remark that the coordinate mismatch between the front and rear base images is ignored in this system because the depth range of the system (50 mm) is small relative to the expected viewing distance (1000 mm). However, if the viewing distance becomes short enough to be comparable to the depth range, the rear image should be expanded according to the field of view of the observer, and the distance z of each pixel should be calculated in diopters, i.e., in reciprocal meters.

After preparing the front and rear base images, the front base images are converted into elemental images for the integral imaging system [17]. The magnification factor is determined by the floating distance, and the original image is cropped and resized accordingly. The obtained elemental images are tiled together to form the elemental image array. The elemental images are then pre-shifted according to Eq. (2) in order to achieve higher performance; this can also be done dynamically with an improved display and tracking algorithm.
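The following is a rough pinhole-model sketch of that conversion for a single floated plane image; it maps every pixel of the elemental image array back through its lens center onto the floated plane and samples the front base image there. It is only an illustration of the geometry, not the authors' exact crop-and-resize implementation, and all names are ours.

```python
import numpy as np

def elemental_images_for_floated_plane(base_img, n_lens, lens_pitch_mm,
                                       gap_mm, float_dist_mm, px_per_mm):
    """Generate an elemental image array that reconstructs `base_img` as a plane
    floated `float_dist_mm` in front of the lens array (pinhole approximation).
    The base image is assumed to span the full lens-array aperture."""
    eip = int(round(lens_pitch_mm * px_per_mm))   # display pixels per elemental image
    size = n_lens * eip
    out = np.zeros((size, size) + base_img.shape[2:], dtype=base_img.dtype)
    bh, bw = base_img.shape[:2]
    array_size_mm = n_lens * lens_pitch_mm
    for j in range(n_lens):
        for i in range(n_lens):
            cy = (j + 0.5) * lens_pitch_mm        # lens center (mm)
            cx = (i + 0.5) * lens_pitch_mm
            for v in range(eip):
                for u in range(eip):
                    # display pixel position relative to its lens center
                    dy = (v + 0.5) / px_per_mm - lens_pitch_mm / 2
                    dx = (u + 0.5) / px_per_mm - lens_pitch_mm / 2
                    # ray through the lens center hits the floated plane here
                    # (real mode: the image is demagnified and inverted)
                    py = cy - dy * float_dist_mm / gap_mm
                    px = cx - dx * float_dist_mm / gap_mm
                    sy, sx = int(py / array_size_mm * bh), int(px / array_size_mm * bw)
                    if 0 <= sy < bh and 0 <= sx < bw:
                        out[j * eip + v, i * eip + u] = base_img[sy, sx]
    return out
```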

Figure 9 shows the tracked observer's viewing direction and the corresponding set of view images. In the experiment, the images are pre-processed, and the system only detects the viewer's position and shows the relevant view images for that position.
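A minimal sketch of that lookup, assuming the 60 × 15 pre-rendered views are spaced equiangularly over the sensor's approximately 70° × 40° field of view; the indexing convention is our own assumption.

```python
def select_view_index(theta_h_deg, theta_v_deg,
                      n_h=60, n_v=15, fov_h_deg=70.0, fov_v_deg=40.0):
    """Pick the pre-rendered view image closest to the tracked viewing direction."""
    # normalize the direction to [0, 1] over the tracking range and clamp
    u = min(max((theta_h_deg + fov_h_deg / 2) / fov_h_deg, 0.0), 1.0)
    v = min(max((theta_v_deg + fov_v_deg / 2) / fov_v_deg, 0.0), 1.0)
    col = min(int(u * n_h), n_h - 1)
    row = min(int(v * n_v), n_v - 1)
    return row, col

# A viewer 10 degrees to the right of and 5 degrees above the display normal:
print(select_view_index(10.0, 5.0))   # -> (9, 38)
```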

Fig. 9 Perspective view images of the experimental object.

The configuration of the experimental system is shown in Fig. 10. For the lens array, 13 × 13 square Fresnel lenses are used, each with a focal length of 22 mm and a size of 10 mm. The gap between the lens array and the elemental image display is 39.3 mm, and the floated images are located 50 mm in front of the lens array. As a result, the depth range of the experimental system is 0–50 mm in front of the lens array. The innate viewing angle of the integral imaging system is 15 degrees, which is large enough to cover both eyes, and the viewing angle is extended to 70 degrees horizontally and 40 degrees vertically by the tracking method, matching the field of view of the tracking sensor. The specifications of the experimental system are summarized in Table 1.

Fig. 10 Configuration of the experimental setup: (a) schematic diagram of the experimental setup, (b) photograph of the experimental setup observed from the rear side.

Table 1. Specification of the experimental system

Visualization 1 in Fig. 11 shows the real-time tracking and the response of the view images. Because the camera is at a fixed location instead of the viewer's position, the front image provided by integral imaging appears cracked, but it is correctly reconstructed at the observer's position. As shown in Visualization 2 in Fig. 12, the set of base images shown by the DFD system according to the obtained viewing direction of the observer is correctly overlapped, resulting in a depth-fused 3D image with motion parallax. However, at steep viewing positions, image cracking is observed due to the aberration of the lens array. The cracking can be eliminated by adopting a pinhole array at the cost of reduced brightness, or the elemental images can be pre-distorted according to the tracked viewer's position. For the tracking system, we adopted the Kanade-Lucas-Tomasi (KLT) algorithm provided by Matlab, and the central position of the detected face is used as the position of the viewer for calculating the image transformation [27, 28].
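The paper uses the Matlab KLT face tracker; the sketch below only illustrates the subsequent step of converting the detected face center (in camera pixels) into a viewing direction, assuming the camera axis coincides with the display normal and a simple equiangular pixel-to-angle mapping. The frame size, field of view, and names are illustrative assumptions.

```python
def face_center_to_angles(cx_px, cy_px, frame_w=1920, frame_h=1080,
                          fov_h_deg=70.0, fov_v_deg=40.0):
    """Convert the detected face center to horizontal and vertical viewing angles
    (equiangular approximation of the camera projection)."""
    theta_h = (cx_px / frame_w - 0.5) * fov_h_deg
    theta_v = (0.5 - cy_px / frame_h) * fov_v_deg   # image y axis points downward
    return theta_h, theta_v

# Face detected slightly right of and above the frame center:
print(face_center_to_angles(1200, 400))   # approximately (8.8, 5.2) degrees
```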

Fig. 11 Tracking of the observer and image response according to the tracked viewer's position. According to the detected viewer's position, the front elemental image is shifted and the parallax is provided (Visualization 1).

Fig. 12 Experimental result of the proposed system at the viewer's position: (a) left, (b) central, and (c) right viewpoint. Change of the parallax is shown within the viewing zone (Visualization 2).

In order to achieve a larger depth range, the viewing perspective of the human eye should be considered because the overlapped area of the front and rear images changes according to the viewing distance. By using a depth sensor, the precise viewing position of the observer can be obtained and used to calculate the perspective of the observer. Also, a lens array with a longer focal length can be used to increase the floating distance, but a longer focal length results in a narrower viewing angle due to the larger f-number. Although a tracking method is used, the minimum viewing angle of the system should be larger than the coverage angle of both eyes, as mentioned in Section 2.

Seam noise of the experimental system may affect accommodation because the high spatial frequency at the seams of the elemental lenses induces accommodation on the lens surface. As reported by Jung et al., a high-angular-resolution integral imaging system can reduce this accommodation deviation [29]. In addition, the seam noise can be reduced by adopting finer elemental lenses or by compensating the luminance distribution of the elemental images according to the lens characteristics.

Recent studies compare DFDs and compressive displays because the two have very similar, and largely interchangeable, system structures [30, 31]. As the DFD method is based on luminance modulation, the modulation range of the depth depends on the luminance. For example, a pixel with a luminance value of 1 (white) cannot be positioned on the front panel because the panel is transparent at that pixel. This kind of problem results in a limited representation of the 3D effect in DFDs.

Table 2 compares the viewing characteristics of a conventional DFD, an integral imaging display, and the proposed system. Provided that the same display panels (2000 × 2000 resolution with an area of 200 × 200 mm2) are used in each example, we compare quantitative viewing characteristics such as spatial resolution, depth range, and viewing angle. We assume that an elemental lens with a focal length of 2 mm and a size of 4 mm is used for the example integral imaging system, and an elemental lens with a focal length of 16.7 mm and a size of 1 mm is used for the example system with the proposed method.

Table 2. Comparison of viewing parameters for DFD, integral imaging, and proposed method

4. Conclusion

In this paper, a real-mode depth-fused display was proposed by employing an integral imaging method and a viewer tracking system. By generating a 3D image with depth fusion and viewer tracking, the viewing condition of the system is greatly improved for a single tracked user. With the depth-fusing effect and its volumetric characteristics, the proposed system can provide most of the major 3D depth cues, including accommodation and motion parallax, over a very wide viewing zone. In addition, the volume of the system is as thin as that of a conventional flat panel display, while the expressible depth range of the system is greater than its physical volume due to the characteristics of the integral imaging system. If a lens array with a shorter focal length is adopted, the current gap between the lens array and the rear LCD can be further reduced. We believe that the proposed method is a promising technology for personal 3D applications such as cell phones, media players, and laptop computers.

Acknowledgments

This work was supported by the IT R&D program of MSIP/KEIT [Fundamental technology development for digital holographic contents]. The human figures shown in Figs. 1, 2, 7, 9, 11 and 12 were provided by Alexander Lee and are used under a Creative Commons Attribution 3.0 license.

References and links

1. B. Lee, “Three-dimensional displays, past and present,” Phys. Today 66(4), 36–41 (2013). [CrossRef]  

2. S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), L71–L74 (2005). [CrossRef]  

3. S. Park, J. Yeom, Y. Jeong, N. Chen, J.-Y. Hong, and B. Lee, “Recent issues on integral imaging and its applications,” J. Inf. Disp. 15(1), 37–46 (2014). [CrossRef]  

4. E. B. Goldstein, Sensation and Perception, 9th ed. (Cengage Learning, 2014).

5. Y. Kajiki, H. Yoshikawa, and T. Honda, “Hologramlike video images by 45-view stereoscopic display,” Proc. SPIE 3012, 154–166 (1997). [CrossRef]  

6. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18(9), 8824–8835 (2010). [CrossRef]   [PubMed]  

7. S.-K. Kim, D.-W. Kim, Y. M. Kwon, and J.-Y. Son, “Evaluation of the monocular depth cue in 3D displays,” Opt. Express 16(26), 21415–21422 (2008). [CrossRef]   [PubMed]  

8. Y. Takaki, Y. Urano, S. Kashiwada, H. Ando, and K. Nakamura, “Super multi-view windshield display for long-distance image information presentation,” Opt. Express 19(2), 704–716 (2011). [CrossRef]   [PubMed]  

9. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. 31(4), 1–11 (2012). [CrossRef]  

10. A. Maimone, G. Wetzstein, M. Hirsch, D. Lanman, R. Raskar, and H. Fuchs, “Focus 3D: compressive accommodation display,” ACM Trans. Graph. 32(5), 1–13 (2013). [CrossRef]  

11. A. Maimone, R. Chen, H. Fuchs, R. Raskar, and G. Wetzstein, “Wide field of view compressive light field display using a multilayer architecture and tracked viewers,” SID Symp. Digest Tech. Papers 45, 509–512 (2014). [CrossRef]

12. K. Akeley, S. J. Watt, A. R. Girshick, and M. S. Banks, “A stereo display prototype with multiple focal distances,” ACM Trans. Graph. 23(3), 804–813 (2004). [CrossRef]  

13. S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010). [CrossRef]   [PubMed]  

14. S. Ravikumar, K. Akeley, and M. S. Banks, “Creating effective focus cues in multi-plane 3D displays,” Opt. Express 19(21), 20940–20952 (2011). [CrossRef]   [PubMed]  

15. S. Suyama, S. Ohtsuka, H. Takada, K. Uehira, and S. Sakai, “Apparent 3-D image perceived from luminance-modulated two 2-D images displayed at different depths,” Vision Res. 44(8), 785–793 (2004). [CrossRef]   [PubMed]  

16. S. Suyama, H. Sonobe, T. Soumiya, A. Tsunakawa, H. Yamamoto, and H. Kuribayashi, “Edge-based depth-fused 3D display,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (OSA, 2013), paper DM2A.3.

17. S.-G. Park, J.-H. Jung, Y. Jeong, and B. Lee, “Depth-fused display with improved viewing characteristics,” Opt. Express 21(23), 28758–28770 (2013). [CrossRef]   [PubMed]  

18. C. Lee, S. Diverdi, and T. Höllerer, “Depth-fused 3D imagery on an immaterial display,” IEEE Trans. Vis. Comput. Graph. 15(1), 20–33 (2009). [CrossRef]   [PubMed]  

19. Z.-L. Xiong, Q.-H. Wang, S.-L. Li, H. Deng, and C.-C. Ji, “Partially-overlapped viewing zone based integral imaging system with super wide viewing angle,” Opt. Express 22(19), 22268–22277 (2014). [CrossRef]   [PubMed]  

20. S. Sawada and H. Kakeya, “Integral volumetric imaging using decentered elemental lenses,” Opt. Express 20(23), 25902–25913 (2012). [CrossRef]   [PubMed]  

21. H. Kakeya, “Realization of undistorted volumetric multiview image with multilayered integral imaging,” Opt. Express 19(21), 20395–20404 (2011). [CrossRef]   [PubMed]  

22. H. Kakeya and Y. Arakawa, “Autostereoscopic display with real-image virtual screen and light filters,” Proc. SPIE 4660, 349–357 (2002). [CrossRef]  

23. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “Rendering for an interactive 360° light field display,” ACM Trans. Graph. 26(3), 40 (2007). [CrossRef]

24. A. Jones, K. Nagano, J. Liu, J. Busch, X. Yu, M. Bolas, and P. Debevec, “Interpolating vertical parallax for an autostereoscopic three-dimensional projector array,” J. Electron. Imaging 23(1), 011005 (2014). [CrossRef]  

25. G. Park, J.-H. Jung, K. Hong, Y. Kim, Y.-H. Kim, S.-W. Min, and B. Lee, “Multi-viewer tracking integral imaging system and its viewing zone analysis,” Opt. Express 17(20), 17895–17908 (2009). [CrossRef]   [PubMed]  

26. Blender, http://blender.org.

27. B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Int. Joint Conf. Artificial Intelligence (IJCAI) (1981), pp. 121–130.

28. C. Tomasi and T. Kanade, “Detection and tracking of point features,” Carnegie Mellon University Tech. Report CMU-CS-91-132 (1991).

29. J.-H. Jung, K. Hong, and B. Lee, “Effect of viewing region satisfying super multi-view condition in integral imaging,” SID Symposium Digest of Technical Papers 43, 883–886 (2012). [CrossRef]

30. F. Huang, K. Chen, and G. Wetzstein, “The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues,” ACM Trans. Graph. 33, 60 (2015).

31. R. Narain, R. A. Albert, A. Bulbul, G. J. Ward, M. S. Banks, and J. F. O’Brien, “Optimal presentation of imagery with focus cues on multi-plane displays,” ACM Trans. Graph. 34(4), 59 (2015). [CrossRef]  

Supplementary Material (2)

Visualization 1: MOV (5169 KB). Tracking of the observer and image response according to the tracked viewer's position.
Visualization 2: MOV (12175 KB). Experimental result.
