Abstract

We propose a depth-fused display (DFD) with enhanced viewing characteristics obtained by hybridizing depth-fusing technology with another three-dimensional display method, such as the multi-view or integral imaging method. Through this hybridization, the viewing angle and the expressible depth range can be extended without increasing the volume of the system compared to the conventional DFD. The proposed method is demonstrated with an experimental system.

© 2013 Optical Society of America

1. Introduction

Three-dimensional (3D) displays have become one of the mainstreams of the current display market. Current 3D displays are focused on stereoscopic displays based on the glasses-type method, which are mainly found in TV and movie applications [1]. The stereoscopic display, however, has limited applications due to the encumbering glasses that are essential to separate the left and right images. Multi-view autostereoscopic displays such as the lenticular type and the parallax barrier type are generally accepted as next-generation 3D displays [1,2]. However, the multi-view autostereoscopic display still has human-factor issues at the current level of technology. One of the main problems is the visual discomfort of a reconstructed 3D image caused by the mismatch between accommodation and convergence. From this viewpoint, the depth-fused display (DFD) is considered a good solution to these problems because it provides an accommodation cue [3–6]. Because the DFD provides accommodation, the visual fatigue caused by its reconstructed image is reported to be as small as that of two-dimensional (2D) displays [7].

The DFD, however, has limitations in depth range and viewing angle. These weaknesses originate from the condition required for the depth-fusing effect. The depth-fusing effect is an adjustment of accommodation according to the luminance ratio of the plane images of the DFD, and it occurs only when the two images are overlapped. For this reason, if an observer moves out of the designed viewing position, the overlapped images become separated, resulting in the failure of depth fusion [3–6].

As a result, the narrow viewing angle of DFDs confines their applications to single-observer displays. Because of this problem, some DFD systems have been used merely as two-depth displays showing two planar images without a depth-fusing effect [8,9]. These displays can provide images for multiple users, but it is hard to say that they provide 3D images. To solve this problem, a two-view DFD using an anisotropic screen and a scattering polarizer was proposed [10]. It exploits polarization to separate the views of the rear images, but the use of polarization limits further increases in the number of views. To improve the viewing characteristics of the previous systems, we proposed DFD systems combined with other kinds of 3D displays to enlarge the expressible depth range and provide more viewing positions, but those systems still have limitations in the expansion of the viewing parameters [11,12].

In this paper, we propose a novel DFD that consists of virtual multi-view rear images and a transparent front image. By inserting an optical element, the rear image can optically change its location according to the position of the observer, which increases the number of viewing positions of the system as well as its depth range without enlarging the whole volume of the system, which remains as compact as a conventional DFD. The feasibility of the proposed method is demonstrated using an experimental system.

2. Principle

In this section, a method for improving the viewing characteristics of the DFD is introduced. The basic concept is to insert an optical element such as a lenticular lens or a 2D lens array between the front and rear layers of the DFD. With the inserted optical element, the system provides enhanced viewing characteristics because the rear image can be optically shifted according to the observer's viewing direction, which prevents image separation. The depth position of the rear image can also be optically translated to extend the depth range of the system. In these configurations, the front image of the DFD is fixed, while the rear image is optically translated by the optical element according to the observer's viewing direction. Consequently, the fused 3D image can be seen from different viewing positions and has a larger depth range.

2.1 Enhancement of viewing angle

The schematic configuration of the system in comparison with a conventional system is shown in Fig. 1. While the conventional system shows separated front and rear images at an oblique viewing position, like observer 1 in Fig. 1(a), the proposed viewing-angle-enhanced system can provide correct depth-fused images at multiple viewing points, as illustrated in Fig. 1(b). When a lenticular lens is inserted, the lens and the rear display panel can be regarded as a conventional multi-view system. However, instead of showing multiple images with different parallaxes as in a conventional multi-view display, showing shifted-view images, as depicted in Fig. 1(b), provides additional viewing points for the DFD by preventing separation of the depth-fused image even when the observer is not positioned directly in front of the system.

 

Fig. 1 Comparison of the conventional DFD and viewing-angle-enhanced DFD: (a) Conventional DFD showing separated images for the side view, (b) DFD with multiple viewing points showing correct depth-fused image at the side view.


For natural depth fusion of the front and rear images, the exact amount of shift should be calculated. For the rear image of the proposed system, horizontally shifted images are generated and interwoven as in a conventional multi-view display. The amount of shift s_n is determined by the gap of the DFD system, g_d, and the tangent of the viewing angle θ_n, as shown in Eq. (1):

$$s_n = g_d \tan\theta_n, \tag{1}$$
$$\theta_n = \tan^{-1}\!\left(\frac{(n-1)\,n_p}{g_l}\right), \tag{2}$$
where the viewing angle is given by Eq. (2) and the subscript n denotes the n-th viewing point counted from the central view. Here n_p represents the pitch of one view of the base image for a multi-view system, and g_l is the gap between the base image and the lenticular lens. The viewing angle is identical to the arctangent of the view interval over the viewing distance.
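As an illustration, the shift of each view can be evaluated directly from Eqs. (1) and (2). The following Python sketch does exactly that; the numeric parameter values are hypothetical examples, not the values used in the experiment.

```python
import math

def view_shift(n, g_d, n_p, g_l):
    """Lateral shift of the n-th view's rear image, following Eqs. (1)-(2).

    n   : view index counted from the central view (n = 1 is the central view)
    g_d : gap between the front and rear image planes of the DFD
    n_p : pitch of one view of the base image behind the lenticular lens
    g_l : gap between the base image and the lenticular lens
    All lengths share the same unit (e.g. mm).
    """
    theta_n = math.atan((n - 1) * n_p / g_l)   # Eq. (2): viewing angle of the n-th view
    s_n = g_d * math.tan(theta_n)              # Eq. (1): shift applied to the rear image
    return s_n, math.degrees(theta_n)

# Hypothetical numbers, for illustration only:
for n in range(1, 4):
    s, theta = view_shift(n, g_d=15.0, n_p=0.1, g_l=1.0)
    print(f"view {n}: theta = {theta:.2f} deg, shift = {s:.3f} mm")
```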

The number of views is an important factor deciding the quality of the reconstructed images. While a large number of views provides smoother transitions between views, the resolution of the image deteriorates as the number of views increases, and vice versa. The maximum shift is limited by the viewing angle of the lenticular lens; in a multi-view display, the viewing angle is decided by the specifications of the lenticular sheet [13,14].

2.2 Enhancement of depth range

Similar to the enhancement of the viewing angle, if a lens array is inserted, the rear part of the system works as an integral imaging system. An integral imaging system is composed of a lenslet array and an elemental image set: the lens array consists of square-shaped lenslets resembling a fly's eye, and the elemental image is a set of small images, each of which corresponds to one lenslet of the array. When the elemental image is observed through the lens array, a 3D image is reconstructed according to the configuration of the system [15]. Because an integral imaging system can represent a plane at a specific depth position, the rear plane can be optically translated to a farther position in order to increase the effective gap of the DFD system. The conceptual diagram of the system in comparison with a conventional system is shown in Fig. 2.

 

Fig. 2 Comparison of the conventional DFD and depth-enhanced DFD. (a) Conventional DFD and (b) DFD with increased depth range with the same thickness of the display system.


The integral imaging part can work in the virtual mode, showing virtual images located behind the lens array. In the virtual mode, the depth range of the system d can be increased without changing the volume of the system d_s, as illustrated in Fig. 2(b). In Fig. 2(b) the total depth of the system is much smaller than the expressible depth range, whereas in the conventional system the depth range equals the dimension of the system, as shown in Fig. 2(a). The extended position d of the virtual plane is calculated from Eq. (3), derived from the simple lens law:

$$d = \frac{g_i f}{g_i - f}, \tag{3}$$
where f is the focal length of the lenslet and g_i is the gap between the lens array and the elemental image plane. By replacing the rear image of the DFD system with an integral imaging system, the depth range of the system can be expanded by the amount d.
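A minimal Python sketch of Eq. (3) is given below; the focal length and gap are hypothetical values chosen only to illustrate the virtual mode (g_i smaller than f), not the parameters of the experimental setup.

```python
def virtual_image_distance(g_i, f):
    """Position of the optically translated rear plane, Eq. (3): d = g_i * f / (g_i - f).

    g_i : gap between the lens array and the elemental image plane
    f   : focal length of a lenslet
    In the virtual mode (g_i < f) the result is negative, meaning the plane
    appears behind the lens array; its magnitude is the usable depth extension.
    """
    return g_i * f / (g_i - f)

# Hypothetical values, for illustration only:
d = virtual_image_distance(g_i=2.5, f=3.3)
print(f"virtual plane located {abs(d):.1f} mm behind the lens array")
```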

2.3 Enhancement of both of the viewing characteristics

Although the concepts of increasing the viewing angle and the depth range have been explained separately, both characteristics can be enhanced at the same time because of the similarity between the multi-view system and the integral imaging system, as shown in the work of Wu et al. [16]. In the integral imaging system the viewer observes a magnified set of elemental images, meaning that only a small portion of the elemental image set is actually seen under a specific viewing condition. The remaining area can be used for shifted views, as in a one-dimensional integral imaging system using a lenticular lens, which is very similar to a multi-view display. If the unused area is filled with the shifted rear images, the system can increase the viewing angle as well as the depth range.

To provide correct depth fusion, the sizes of the front and rear images should be matched considering the specifications of the optical components. The most important factor is the magnification of the lenslet; other parameters such as the resolution, viewing angle, and depth range are decided according to it. The magnification is defined as the ratio of d to g_i, i.e., the distance from the lens array to the virtual image plane divided by the gap between the lens array and the elemental image. As shown in Fig. 3, the active area of the elemental image is reduced by the magnification factor m_g, and the remaining area can be filled with shifted-view images. The depth range of the system is increased by the magnification factor compared to the gap between the lens array and the elemental image. The maximum viewing angle corresponds to the case in which the viewer observes the outermost view of the elemental image, and the normal viewing angle is the angle of the viewing direction measured from the central view. The viewing characteristics enhanced by the lens array are summarized in Table 1, and the viewing angles of the additional views and the increase in the number of views according to the magnification of the system are described in Fig. 4.
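The sketch below collects these relations in one place. It is only our reading of the text (Table 1 itself is not reproduced here): the number of shifted views per axis is estimated as the integer part of the magnification, and all numeric inputs are hypothetical.

```python
def hybrid_dfd_parameters(g_i, f, lens_pitch):
    """Approximate parameter relations of Sec. 2.3 (sketch under stated assumptions).

    g_i        : gap between the lens array and the elemental image plane
    f          : lenslet focal length
    lens_pitch : pitch of one lenslet (= lateral size of one elemental image)
    """
    d = g_i * f / (g_i - f)          # Eq. (3): translated rear-plane position
    m_g = abs(d) / g_i               # magnification of the rear integral imaging part
    active = lens_pitch / m_g        # portion of the elemental image actually seen
    n_views = int(m_g)               # assumed: shifted views that fit per axis in the remaining area
    depth_range = m_g * g_i          # = |d|, gap between the front plane and the fused rear plane
    return {"magnification": m_g, "active_pitch": active,
            "views_per_axis": n_views, "depth_range": depth_range}

# Hypothetical numbers, for illustration only:
print(hybrid_dfd_parameters(g_i=2.5, f=3.3, lens_pitch=1.0))
```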

 

Fig. 3 Relationship of the parameters of system components.



Table 1. Parameters and Their Relationship of the Viewing Characteristics and System Components

 

Fig. 4 Angles of viewing direction (viewing angles) from the center to each viewing point according to the magnification of the rear integral imaging system with parameters in Table 2.


The subscript of the viewing angle in Table 1 denotes the sequence of viewing points counted from the central viewing point, and its maximum value is half of the number of views n_v. The increased number of views also reduces the crosstalk during transitions between viewing positions: crosstalk arises when the viewing position changes, but denser viewing positions provide smoother transitions, as in a conventional multi-view system. However, an increased number of views requires a higher magnification factor, as shown in Fig. 4, which degrades the resolution of the image. There is thus a trade-off between the number of viewing points and the resolution, which should be optimized for the intended application.

We should also remark that the viewing angle and the depth range increase at the same time only with some restrictions. Because the two parameters are highly dependent on each other, only optimal sets of the viewing angle and the depth range are available. While conventional DFDs are relatively free to adjust the depth range by changing the gap, the proposed system should use the optimal depth, decided by the product of the magnification m_g and the gap g_i of the integral imaging part, in order to provide the correct depth-fusing effect.

The resolution of the rear image is also decreased by the magnification factor. To reduce this degradation, a display with a small pixel pitch can be adopted for the rear display. By applying proper image processing, the sizes of the front and rear images can be matched to provide a correct depth-fused image.

Table 2 compares the viewing characteristics of the integral imaging method, the proposed method, and the DFD method. The values in parentheses are the expected parameter values of each system with the same display components used in the experiment in this paper. Because the proposed method converts resolution into an enhanced viewing angle, its resolution is relatively degraded compared to the conventional DFD method, but the depth range and viewing angle are improved. While the integral imaging method provides only a modest depth range around the central depth plane according to the characteristic equation [15], the proposed method can provide a larger depth range using the depth-fusing effect. By adjusting the parameters in Table 1, a proper balance of resolution, viewing angle, and depth range suitable for a specific application can be chosen.


Table 2. Comparison of the Integral Imaging, Proposed Method, and the Depth-fused Display

3. Experiments

To demonstrate the feasibility of the system, we implemented an experimental system using flat panel displays (FPDs) and a lens array. Two kinds of FPDs with different pixel pitches are used. Because the rear image is magnified, the pixel density of the rear display should be higher than that of the front display by the amount of the magnification in order to prevent degradation of the image resolution. The magnification, the resolution, and the depth range of the system are decided according to the specifications of the components, which are summarized in Table 3.
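This pixel-density requirement can be written as a one-line check: the rear pixel pitch multiplied by the magnification should not exceed the front pixel pitch. The pitches in the Python sketch below are hypothetical examples, not the values from Table 3.

```python
def resolution_matched(front_pitch, rear_pitch, m_g):
    """True if the rear display's pixels, magnified by m_g, are not coarser
    than the front display's pixels (pitches in mm)."""
    return rear_pitch * m_g <= front_pitch

# Hypothetical pitches, for illustration only:
print(resolution_matched(front_pitch=0.28, rear_pitch=0.08, m_g=3.0))  # True
```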


Table 3. Specification of the Experimental Setup

A computer-generated 3D object is used as the source image. The 2D image and its corresponding depth map are generated using graphics software. With the given depth map, the 2D image is separated into the front and rear images according to a linear luminance-depth weighting [6]. Although the dioptric distance should be used in calculating the luminance modulation, the linear distance is used to simplify the calculation, because the result is not severely altered when the viewing distance is large compared to the depth range of the system. If the depth range of the system becomes comparatively large, however, the viewing distance should be fixed and the dioptric distance should be used in the calculation of the luminance modulation. The calculated rear image is then converted into an elemental image, but for assigning the shifted rear-view images, only the active portions of the elemental image are recorded. The obtained front, rear, and converted rear images are shown in Fig. 5.
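The following Python sketch shows one minimal form of such a linear luminance-depth weighting: each pixel's luminance is divided between the front and rear layers in proportion to its normalized depth. It is an illustration only; the exact formulation used in the paper and in Ref. [6] may differ.

```python
import numpy as np

def split_dfd_images(image, depth, z_front=0.0, z_rear=1.0):
    """Split a 2D image into front and rear DFD layers with a linear
    luminance-depth weighting (a sketch, not the paper's exact procedure).

    image : (H, W) or (H, W, 3) array of luminance values
    depth : (H, W) depth map; z_front marks the front plane, z_rear the rear plane
    """
    # Normalized depth weight: 0 at the front plane, 1 at the rear plane.
    w = np.clip((depth - z_front) / (z_rear - z_front), 0.0, 1.0)
    if image.ndim == 3:                  # broadcast the weight over color channels
        w = w[..., np.newaxis]
    front = (1.0 - w) * image            # near pixels are brighter on the front layer
    rear = w * image                     # far pixels are brighter on the rear layer
    return front, rear

# Illustration with random data:
img = np.random.rand(4, 4)
dep = np.random.rand(4, 4)
f_img, r_img = split_dfd_images(img, dep)
assert np.allclose(f_img + r_img, img)   # the two layers sum back to the original luminance
```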

 

Fig. 5 Source images for the proposed DFD system: (a) front image, (b) rear image, and (c) converted rear image. The red box shows the enlarged elemental images, and the dashed square indicates the area of a lenslet of a lens array. The object is modeled by Kuhn Industries and used under Creative Commons Attribution 3.0.


The remaining area of the converted rear elemental image is filled with the shifted rear images in order to extend the viewing angle of the system. According to the given system parameters, the shifting distance is calculated and applied to the converted rear image horizontally and vertically, as shown in Fig. 6(a). The side-view images are colored to clarify the shift of each view. Figure 6(b) shows the combined elemental image of the shifted images, and Fig. 6(c) shows the elemental image without coloring. The red boxes in Figs. 6(b) and 6(c) indicate the elemental image assigned to one lenslet, which is composed of the shifted sub-elemental images, as shown in the magnified image at the lower right corner.
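The composition of one elemental image from the shifted sub-elemental images can be sketched as a simple tiling step, shown below in Python. This is a simplified illustration of the arrangement in Fig. 6; the exact pixel mapping used in the experiment is not reproduced here, and the view count and patch size are hypothetical.

```python
import numpy as np

def combine_shifted_elemental_images(elemental_views, n_v):
    """Tile the sub-elemental images of the shifted views into one elemental
    image per lenslet (a simplified sketch of the composition in Fig. 6).

    elemental_views : dict mapping a view index (i, j), with i, j in range(n_v),
                      to an (h, w) sub-elemental image; all patches share one shape.
    n_v             : number of views per axis
    """
    h, w = next(iter(elemental_views.values())).shape
    combined = np.zeros((n_v * h, n_v * w))
    for (i, j), sub in elemental_views.items():
        combined[i * h:(i + 1) * h, j * w:(j + 1) * w] = sub   # place each view's patch
    return combined

# Illustration: 3 x 3 views, each sub-elemental image 5 x 5 pixels.
views = {(i, j): np.full((5, 5), 3 * i + j, dtype=float) for i in range(3) for j in range(3)}
tile = combine_shifted_elemental_images(views, n_v=3)
print(tile.shape)   # (15, 15)
```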

 

Fig. 6 Elemental image set: (a) individual shifted rear images according to the viewing directions with differentiated color for the purpose of calibration, (b) combination of nine different elemental images shown in (a) (The elemental image in the red box shows the combination of nine shifted-elemental images.), and (c) combined elemental image. The red boxes show an elemental image allocated to a lenslet. The object is modeled by Kuhn Industries and used under Creative Commons Attribution 3.0.


The experimental system is shown in Fig. 7(a). The gap between the front display and the rear lens array is about 7.7 mm, which gives a virtual image 33 mm behind the lens array even though the total depth of the system is only 20 mm. For comparison, an integral imaging system with the same experimental parameters gives a 4 mm depth range according to the characteristic equation in [15], and a conventional DFD with the same specifications provides about 15 mm of depth range limited by the system dimensions.

 

Fig. 7 Experimental system: (a) front view of the experimental setup and (b) side view of the experimental setup. The total depth of the system is 20 mm.


Figure 8 shows the additional viewing positions according to the viewing directions. The front and rear images are clearly fused even when the viewer is located away from the central position. The expanded depth of the system can be confirmed using a half mirror, as shown in Fig. 9. By superposing the depth-fused image on a real target object, as demonstrated in Fig. 9(c), the depth range of the system can be verified. In the setup, the orange and blue arrows indicate the optical paths of the depth-fused image and the target image, respectively. In Fig. 9(a) the target is located at the head of the jet fighter, while in Fig. 9(b) it is located at the tail. In both images the target is clearly in focus, and the gap between the two target positions is 30 mm, which is slightly smaller than the calculated depth range (33 mm).

 

Fig. 8 Result of the experimental system (a) according to the different viewing angles and (b) its movie file (Media 1). The object is modeled by Kuhn Industries and used under Creative Commons Attribution 3.0.


 

Fig. 9 Comparison of the depth range of the proposed system using half mirror: (a) target located at the front image plane, (b) target located at the rear plane, and (c) half-mirror setup for finding depth range. The object is modeled by Kuhn Industries and used under Creative Commons Attribution 3.0.


A moiré pattern caused by the subpixel structure of the rear display appears through the lens array. To reduce this color moiré, the magnification can be reduced, or a horizontal subpixel structure can be adopted to reduce the horizontal color dispersion without compromising the magnification factor [17].

4. Conclusion

In this paper, we proposed a DFD system that can be seen from wider viewing angles with an extended depth range. We expect the proposed system to have advantages in viewing conditions as well as in the quality of the reconstructed 3D images compared to other 3D display methods. Another advantage of the proposed system is its compactness: while other volumetric displays require bulky volumes to represent voxels, the proposed system can generate voxels within a virtual space. Based on the analysis of the system parameters, proper adjustment of each viewing parameter would widen the range of applications for the DFD, such as a personal portable 3D display with a large depth range or a wide-viewing-angle display for multiple viewers.

Acknowledgment

This work was supported by the National Research Foundation of Korea grant funded by the Korean government (MSIP) through the National Creative Research Initiatives Program (#2007-0054847).

References and links

1. H. Urey, K. V. Chellappan, E. Erden, and P. Surman, “State of the art in stereoscopic and autostereoscopic displays,” Proc. IEEE 99(4), 540–555 (2011). [CrossRef]  

2. B. Lee, “Three-dimensional displays, past and present,” Phys. Today 66(4), 36–41 (2013). [CrossRef]  

3. S. Suyama, S. Ohtsuka, H. Takada, K. Uehira, and S. Sakai, “Apparent 3-D image perceived from luminance-modulated two 2-D images displayed at different depths,” Vision Res. 44(8), 785–793 (2004). [CrossRef]   [PubMed]  

4. S. Suyama, H. Sonobe, T. Soumiya, A. Tsunakawa, H. Yamamoto, and H. Kuribayashi, “Edge-based depth-fused 3D display,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (online) (Optical Society of America, 2013), paper DM2A.3.

5. S. Liu and H. Hua, “A systematic method for designing depth-fused multi-focal plane three-dimensional displays,” Opt. Express 18(11), 11562–11573 (2010). [CrossRef]   [PubMed]  

6. S. Ravikumar, K. Akeley, and M. S. Banks, “Creating effective focus cues in multi-plane 3D displays,” Opt. Express 19(21), 20940–20952 (2011). [CrossRef]   [PubMed]  

7. S. Suyama, Y. Ishigure, H. Takada, K. Nakazawa, J. Hosohata, Y. Takao, and T. Fujikado, “Evaluation of visual fatigue in viewing a depth-fused 3-D display in comparison with a 2-D display,” NTT Tech. Rev. 3, 82–89 (2005).

8. J.-W. Seo and T. Kim, “Double-layer projection display system using scattering polarizer film,” Jpn. J. Appl. Phys. 47(3), 1602–1605 (2008). [CrossRef]  

9. E. Walton, A. Evans, G. Gay, A. Jacobs, T. Wynne-Powell, G. Bourhill, P. Gass, and H. Walton, “Seeing depth from a single LCD,” in SID Symposium Digest of Technical Papers (Blackwell Publishing Ltd, 2009), Vol. 40, No. 1, pp. 1395–1398.

10. M. Date, S. Sugimoto, H. Takada, and K. Nakazawa, “Depth-fused 3D (DFD) display with multiple viewing zones,” Proc. SPIE 6778, 677817 (2007). [CrossRef]  

11. S.-g. Park, J.-H. Jung, Y. Kim, and B. Lee, “Depth-fused display with enhanced viewing region,” in Biomedical Optics and 3-D Imaging, OSA Technical Digest (Optical Society of America, 2012), paper DSu1C.5.

12. S.-g. Park, J.-H. Jung, and B. Lee, “Depth expansion of depth-fused display based on integral imaging method,” in Proc. 12th Int. Meeting on Inf. Display (IMID 2012) (Daegu, Korea, 2012), 473–474.

13. J.-H. Jung, J. Yeom, J. Hong, K. Hong, S.-W. Min, and B. Lee, “Effect of fundamental depth resolution and cardboard effect to perceived depth resolution on multi-view display,” Opt. Express 19(21), 20468–20482 (2011). [CrossRef]   [PubMed]  

14. J.-Y. Son, V. V. Saveljev, Y.-J. Choi, J.-E. Bahn, S.-K. Kim, and H. Choi, “Parameters for designing autostereoscopic imaging systems based on lenticular, parallax barrier, and integral photography plates,” Opt. Eng. 42(11), 3326–3333 (2003). [CrossRef]  

15. S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), L71–L74 (2005). [CrossRef]  

16. F. Wu, H. Deng, C.-G. Luo, D.-H. Li, and Q.-H. Wang, “Dual-view integral imaging three-dimensional display,” Appl. Opt. 52(20), 4911–4914 (2013). [CrossRef]   [PubMed]  

17. Y. Kim, G. Park, J.-H. Jung, J. Kim, and B. Lee, “Color moiré pattern simulation and analysis in three-dimensional integral imaging for finding the moiré-reduced tilted angle of a lens array,” Appl. Opt. 48(11), 2178–2187 (2009). [CrossRef]   [PubMed]  


Supplementary Material (1)

Media 1: MOV (3270 KB)
