Abstract

A three-dimensional (3D) display with smooth motion parallax and a large viewing angle is demonstrated, based on a microlens array and a coded two-dimensional (2D) image on a 50 inch liquid crystal display (LCD) panel with a resolution of 3840 × 2160. By expressing accurate depth cues, the flipping images of traditional integral imaging (II) are eliminated and smooth motion parallax is achieved. The image on the LCD panel is coded as a repeatedly packed elemental image, and the depth cue is determined by the repeated period of the elemental image. To construct a 3D image with a complex depth structure, a spatially varying period of the elemental image is required. Here, the detailed principle and coding method are presented. The shape and the texture of a target 3D image are designed by a structure image and an elemental image, respectively. In the experiment, two groups of structure images and their corresponding elemental images are utilized to construct a 3D scene with a football in a green net. The constructed 3D image exhibits obviously enhanced 3D perception and smooth motion parallax. The viewing angle is 60°, which is much larger than that of traditional II.

© 2015 Optical Society of America

1. Introduction

As 3D displays can provide vivid visual experiences, techniques for 3D imaging have attracted considerable attention from scientists and engineers [1], including multi-view 3D displays with parallax barriers or lenticular sheets [2], volumetric 3D displays [3], holographic displays [4] and so on. Among these various 3D display techniques, integral imaging (II) is an attractive method for obtaining a 3D image with a lens array and an ordinary two-dimensional (2D) display device [5]. The reconstructed 3D image is integrated by the lens array from elemental images displayed on the display device. With the lens array and elemental images, II addresses the angular distribution of light rays. Unlike holography, no coherent light source is needed, and full natural color images can be easily realized. II also provides full parallax and continuously varying viewpoints. However, many problems still limit the applications of II. In recent years, important research efforts have been devoted to overcoming these limitations, such as improving the resolution [6–8], extending the depth of field [9–11], eliminating image distortion [12] and enlarging the viewing angle [13–18]. The limited viewing angle is one of the primary disadvantages of II.

In II, elemental lenses and elemental images are densely packed in the array. The 3D light field is formed only when each elemental image is observed through the corresponding elemental lens. If the observer's viewing direction deviates largely from the center viewing range, the elemental images are observed through the neighboring elemental lenses, and flipped images are observed. To prevent image flipping, optical barriers were used to physically block the light rays passing through other elemental lenses [19]. However, the viewing angle cannot be enhanced in this way. The viewing angle θ in II is limited by the area in which each lens can display its corresponding elemental image, and it can be calculated by Eq. (1):

θ = 2 arctan(T0 / (2g))    (1)
where T0 is the lens pitch, i.e., the repeated period of the lenses, and g is the gap between the lens array and the display panel. Increasing the lens size enhances the viewing angle. However, the lens pitch cannot be arbitrarily large, because the viewing resolution is inversely proportional to the period of the lens array; an increased lens pitch degrades the 3D effect. To overcome the conflict between viewing angle and resolution, several methods have been proposed. A moving lens array with a low fill factor was used to improve the viewing angle [16]. In another method, a curved lens array and a flexible screen were employed [17]. A tracking II system that generates elemental images for each viewer provides a large viewing angle [18]. Recently, we employed active partially pixelated masks and a tracking device to realize a 3D display with a 56° viewing angle [20]. In most of the proposed approaches to viewing angle enhancement, the structure of the display system is adjusted or a tracking device is introduced.
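
As a numerical illustration of Eq. (1), the following sketch (with function names of our own choosing) evaluates the traditional II viewing angle for the lens parameters used later in the experiment, a 0.5 mm pitch and a 6 mm gap:

```python
import math

def viewing_angle_deg(lens_pitch, gap):
    """Eq. (1): theta = 2*arctan(T0 / (2*g)), returned in degrees."""
    return math.degrees(2 * math.atan(lens_pitch / (2 * gap)))

# With the 0.5 mm lens pitch and 6 mm gap of the experimental setup,
# a traditional II arrangement offers only a few degrees:
print(round(viewing_angle_deg(0.5, 6.0), 2))  # about 4.77 degrees
```

The small result illustrates why the viewing angle of traditional II with fine lens pitches is so limited.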

Here, a large viewing angle three-dimensional display with accurate depth cues is demonstrated. The proposed display system utilizes the traditional optical structure of II, with a microlens array and a flat LCD panel. To overcome the flipping effect caused by elemental images being observed through different elemental lenses, the elemental images are repeatedly packed on the flat display panel. By employing the novel image coding method, all the repeated images observed at or beyond the center viewing range are taken as continuous views. In the image coding process, a detailed depth information map is needed to display the 3D structure, and the elemental images are utilized as the texture covering the surface of the designed structure. In the experiment, the proposed 3D display method provides obvious 3D perception and a viewing angle of more than 60° without any viewing jump, for several observers simultaneously.

2. Principle

2.1 Basic realization mechanism

To realize the required continuous structure, the elemental image is packed repeatedly on the LCD panel. The schematic diagram of the large viewing angle 3D imaging system architecture is shown in Fig. 1. The period of the microlens array is T0, which is equal to the lens pitch. The repeated period T of the elemental image is substantially identical to T0. The scale ratio of the repeated period of the elemental images to the period of the microlens array determines the 3D visual effect and the corresponding depth value. When the scale ratio is greater than 1.000, a floating image is observed. When the scale ratio is less than 1.000, a deep image is observed. With a scale ratio of exactly 1.000, the light rays emitted from different elemental images are parallel to each other, and neither a floating image nor a deep image is observed. According to geometric analysis, the relationship among T, T0 and the display depth dout (din) can be represented as follows:

Fig. 1 Schematic diagram of the large viewing angle 3D images. (a) Formation of floating 3D image. (b) Formation of deep image.

T = ((dout + g) / dout) T0    (2)
T = ((din - g) / din) T0    (3)

In the proposed system, the gap g between the microlens array and the LCD panel is equal to the focal length of the lenses. In this situation, points on the display panel emit light rays passing through the center of each lens, and these light rays form the 3D image with motion parallax. Fig. 1 is a lateral schematic diagram presenting the formation principle of the floating image and the deep image. As illustrated, the icon elements are packed with the repeated period T on the display panel. For the floating image, the icon elements are arranged reversed; for the deep image, they are not. To describe the coded image on the LCD panel, the parameters x and y are defined as the horizontal and vertical coordinates of the image. Here, the distribution function of the icon image with unit length and width is defined as f(x, y) (0 < x < 1, 0 < y < 1). The color and intensity of every point with coordinate (x, y) on the image are expressed by this distribution function. For the deep image, the coded map F(x, y) on the LCD panel is constituted by the unit images arranged with the repeated period T, and the distribution function can be expressed as f((x - [x/T]int T)/T, (y - [y/T]int T)/T), where [·]int denotes the integer part. The repeated elemental images are reversed for the floating image, and the distribution function is f(1 - (x - [x/T]int T)/T, 1 - (y - [y/T]int T)/T). The size and repeated period of the icon elements can be calculated by Eq. (2) or (3). Substituting the expression for T, the 2D coding map on the display panel as a function of the display depth can be expressed as follows:

F(x, y; dout) = f(1 - (x - [x/T]int T)/T, 1 - (y - [y/T]int T)/T), where T = ((dout + g) / dout) T0    (4)
F(x, y; din) = f((x - [x/T]int T)/T, (y - [y/T]int T)/T), where T = ((din - g) / din) T0    (5)

By employing Eqs. (4) and (5) to code the image on the display panel, the enlarged elemental image can be displayed accurately at an arbitrary distance from the screen with a large viewing angle. The synthetic magnification factor between the deep image (or floating image) and the elemental image depends on the ratio T / T0 of the system: the magnification equals the absolute value of 1 / (1.000 - T / T0). For example, if the elemental images and the microlens array have a scale ratio of 0.995, the display system exhibits deep images with a magnification of 1 / (1.000 - 0.995) = 200. Similarly, for a floating image display system with a scale ratio of 1.005, the magnification is |1 / (1.000 - 1.005)| = 200.
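
The period selection of Eqs. (2) and (3), the pixel-wise coding of Eqs. (4) and (5), and the magnification formula can be sketched as follows; the function and variable names are our own, and f stands for any elemental image defined on the unit square:

```python
def repeated_period(depth, gap, lens_pitch, floating=True):
    """Eq. (2) for a floating image, Eq. (3) for a deep image."""
    if floating:
        return (depth + gap) / depth * lens_pitch  # T > T0
    return (depth - gap) / depth * lens_pitch      # T < T0

def coded_map(f, x, y, period, floating=True):
    """Eqs. (4)/(5): sample the coded LCD image at panel coordinates (x, y).
    f(u, v) is the elemental image on the unit square (0 < u, v < 1)."""
    u = (x - (x // period) * period) / period  # fractional position in the cell
    v = (y - (y // period) * period) / period
    if floating:
        return f(1 - u, 1 - v)  # the unit image is reversed for floating images
    return f(u, v)

def magnification(period, lens_pitch):
    """Synthetic magnification |1 / (1 - T/T0)| of the displayed image."""
    return abs(1.0 / (1.0 - period / lens_pitch))

# A scale ratio T/T0 of 0.995 reproduces the magnification of 200 from the text
print(magnification(0.995, 1.0))
```

The floor-based remainder in `coded_map` is what tiles one unit image across the whole panel with period T.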

2.2 2D image coding method for multiple display contents

To obtain multiple display contents at different display planes, multiple elemental images and corresponding display depth values are required. As an example, the coding process of two-layer display content is illustrated in Fig. 2. The two elemental images and their corresponding depth values are described as f1(x, y), f2(x, y) and d1, d2, respectively. According to Eqs. (4) and (5), the pixel distributions F1(x, y; d1) and F2(x, y; d2) can be calculated.

Fig. 2 Coding process of two layer display content with different depth

To display the two layers with the right perspective, the two calculated pixel distribution functions are added together to obtain the total 2D coding image Ftotal(x, y) on the LCD panel, as shown in formula (6). Here, the symbol "+" means that one coding image covers another according to the displayed position: the closer a 3D image is displayed to the observer, the higher its coding map lies in the stack on the LCD panel. As shown in Fig. 2, the letters "C" are designed to be displayed in front of the letters "A", so the elemental image array F2(x, y; d2) covers the elemental image array F1(x, y; d1).

Ftotal(x, y) = F1(x, y; d1) + F2(x, y; d2)    (6)
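
A minimal sketch of the covering operation denoted by "+" in formula (6), under the assumption (ours, not stated in the text) that pixels outside a layer's content are marked by a transparent value:

```python
def compose_layers(front, back, transparent=0):
    """Formula (6): the closer layer's coding image covers the farther one
    wherever the closer layer has content (non-transparent pixels)."""
    return [f if f != transparent else b for f, b in zip(front, back)]

# Toy 1-D example: the front layer occludes the back layer where it has content
print(compose_layers([0, 2, 2, 0], [1, 1, 1, 1]))  # [1, 2, 2, 1]
```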

With the coded result in Fig. 2 and the lens array, 3D images with digitally generated tomographic images can be displayed, providing correct depth information with smooth motion parallax and a large viewing angle. Pictures of the displayed 3D images taken at different positions for two planar images are shown in Fig. 3.

Fig. 3 Pictures taken at different positions for 3D display with digitally generated tomographic images

2.3 2D image coding method for content with complex depth information

To construct a 3D image with a complex shape instead of a plane at a definite depth, the subpixel distribution of the 2D image on the LCD panel should be coded according to the varying depth values. A structure map containing the depth information is used to construct the 3D image shape. The implementation of the large viewing angle 3D display with a complex structure is illustrated in Fig. 4. As an example, a structure depth map and an elemental image are utilized to calculate the subpixel distribution function on the LCD panel. The structure depth map d(x, y) has the shape of a half sphere in front of the microlens array. The elemental image has the content f(x, y), a green letter "C". The number of depth values in the structure map is equal to the number of subpixels.

Fig. 4 Formation principle of the proposed 3D display with complex depth structure

The display depth of each region is determined by the repeated period of the icon elements in the corresponding region. As the half sphere is designed to be displayed outside the screen, the depth distribution function d(x, y) of the structure map can be substituted into Eq. (4) to obtain the coding image Fstructure(x, y):

Fstructure(x, y) = f(1 - (x - [x/T]int T)/T, 1 - (y - [y/T]int T)/T), where T = ((d(x, y) + g) / d(x, y)) T0    (7)

As shown in Fig. 4, the repeated period of the elemental images in the coded image changes according to the depth values. To present the coded result clearly, the region with sharp depth variation in the lower-left corner is magnified. As illustrated in the magnified picture, the icon element letters are stretched gradually with the depth variation of the structure map. When the 3D image is designed to be displayed inside the screen, d(x, y) should be substituted into Eq. (5) to obtain the following equation:

Fstructure(x, y) = f((x - [x/T]int T)/T, (y - [y/T]int T)/T), where T = ((d(x, y) - g) / d(x, y)) T0    (8)

By forming the coded picture according to formula (7) or (8), a 3D image with the structure's depth information can be displayed, with the elemental images utilized as the texture covering its surface. The magnification factor of the displayed elemental image changes according to the depth value distribution of the structure depth map.
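
Eqs. (7) and (8) differ from Eqs. (4) and (5) only in that the period T is recomputed from d(x, y) at every pixel. A per-pixel sketch, with hypothetical depth maps and a gradient texture standing in for the half-sphere example:

```python
def coded_structure_pixel(f, depth_map, x, y, gap, lens_pitch, outside=True):
    """Eqs. (7)/(8): sample the coded image when the period varies with the
    structure depth map d(x, y). f(u, v) is the elemental (texture) image on
    the unit square; outside=True selects Eq. (7) (image in front of screen)."""
    d = depth_map(x, y)
    if outside:
        T = (d + gap) / d * lens_pitch   # Eq. (7) period
    else:
        T = (d - gap) / d * lens_pitch   # Eq. (8) period
    u = (x - (x // T) * T) / T           # fractional position in the local cell
    v = (y - (y // T) * T) / T
    return f(1 - u, 1 - v) if outside else f(u, v)

# Hypothetical depth maps and texture: where the depth changes, the local
# period (and hence the stretch of the icon element) changes with it.
flat = lambda x, y: 100.0
bump = lambda x, y: 150.0
tex = lambda u, v: u  # a simple gradient texture as placeholder content
sample_flat = coded_structure_pixel(tex, flat, 3.1, 3.1, 6.0, 0.5)
sample_bump = coded_structure_pixel(tex, bump, 3.1, 3.1, 6.0, 0.5)
```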

This principle and coding method impose one limitation: the display content should be a repeated texture. By employing several groups of repeated textures and their corresponding depth maps, an integrated 3D scene can be constructed.

3. Experimental results

Here, two structure depth maps and their corresponding elemental images are used to construct multiple display contents with complex depth information. In our experimental setup, the lens pitch is 0.5 mm with a fabrication precision of ±1 μm, and a 50 inch LCD panel with a resolution of 3840 × 2160 is used. The gap between the microlens array and the LCD panel is 6 mm.
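
For orientation, the repeated periods and magnifications implied by these setup parameters can be tabulated for a few illustrative floating depths (the depth values are our assumptions, not those of the displayed scene):

```python
T0, g = 0.5, 6.0  # lens pitch and gap of the experimental setup, in mm
for d_out in (60.0, 120.0, 240.0):  # illustrative floating depths, mm
    T = (d_out + g) / d_out * T0    # Eq. (2)
    M = abs(1 / (1 - T / T0))       # synthetic magnification, equals d_out / g
    print(f"d_out = {d_out:5.1f} mm -> T = {T:.4f} mm, magnification ~= {M:.0f}")
```

The deeper the floating image, the closer T approaches T0 and the larger the magnification of the elemental image becomes.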

For the experimental 3D reconstruction, a 3D scene with a football in a green net is displayed. The two groups of structure images and elemental images are shown in Figs. 5(a) and 5(b). The structure depth map d1(x, y) and the elemental image f1(x, y) of group 1 are utilized to construct the shape and texture of a rectangular green net. The structure depth map d2(x, y) and the elemental image f2(x, y) of group 2 are utilized to construct the shape and texture of a football. Both are designed to appear outside the screen, so Eq. (7) is employed to obtain the 2D coding image.

Fig. 5 Structure images and their corresponding icon images

Using the content of the two groups, the subpixel distribution functions are calculated. Fstructure1(x, y) is the coding image of the net, and Fstructure2(x, y) is the coding image of the football. To form the scene with the ball in front of the net, Fstructure1(x, y) is covered by Fstructure2(x, y), as shown in Fig. 6. The total coded image Ftotal(x, y), which is the combination of the two distribution functions, can be described by the following formula:

Fig. 6 Coding process of the designed 3D scene

Ftotal(x, y) = Fstructure1(x, y) + Fstructure2(x, y)    (9)

With the total coded image and the microlens array, the designed 3D content can be displayed. Pictures of the displayed 3D image taken at different positions are shown in Fig. 7, which exhibit an obvious 3D effect. The observer sees no flipping images, which cause visual discomfort, and both binocular parallax and smooth motion parallax are achieved.

Fig. 7 Photographs taken at different viewing angles (see Visualization 1)

To analyze the light field of the 3D display system, the epipolar-plane image (EPI) is introduced [21]. The EPI is defined as a stack of one-dimensional lines captured from different viewing locations along the horizontal direction. The light field projection of every point in the 3D scene appears as a line in the image, and the slant of the lines in the EPI is proportional to the depth of the corresponding scene point. The light field produced by a 3D display system can thus be visualized with the EPI. For a traditional 3D display, the flipped images observed through different lenses create an EPI with a discontinuous structure. Here, three EPIs built from images captured across the 60° viewing angle are illustrated as follows.
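
A toy EPI for a single scene point can be constructed under a simplified pinhole geometry that we assume here (viewpoints on a line at distance L from the screen, point at depth d in front of it); it shows the straight, continuous trace whose slant grows with depth:

```python
def screen_position(x0, d, v, L):
    """Lateral position where the ray from viewpoint (v, L) through the scene
    point (x0, d) crosses the screen plane; d is the point's depth in front
    of the screen (0 <= d < L). A point on the screen (d = 0) does not move."""
    return v + L / (L - d) * (x0 - v)

# Stack the same scanline over viewpoints: one lit pixel per row forms a line.
views = [i / 39 - 0.5 for i in range(40)]  # 40 viewpoints across one unit
n_px = 100
epi = [[0] * n_px for _ in views]
for row, v in zip(epi, views):
    col = round((screen_position(0.2, 0.3, v, 2.0) + 0.5) * (n_px - 1))
    if 0 <= col < n_px:
        row[col] = 1
# The lit column shifts linearly with the viewpoint, so the point appears as
# a continuous slanted line -- the structure a smooth-parallax display yields.
```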

As shown in Fig. 8, three lines of the designed 3D image are marked with different colors. They are used to express the depth information of different parts of the designed 3D scene. In the constructed EPIs, the black and white lines represent the depths of different pieces of the football, and the green lines represent different depths of the green net. The EPIs exhibit a continuous structure. Therefore, smooth motion parallax can be obtained by observers at arbitrary positions in the proposed large viewing angle 3D display system.

Fig. 8 Light field produced by the designed 3D image (EPI)

According to the formation principle and the EPI analysis of the proposed 3D display method, the 2D picture with repeated elemental images and the microlens array can express depth cues and eliminate the flipping effect. Ideally, the viewing angle of the 3D image is close to 180° in front of the screen. However, the aberration of the employed lenses degrades the 3D imaging quality of the display system, which limits the clear 3D viewing range. Spot diagrams for different viewing angles from the center of the screen are shown in Fig. 9. When the viewing angle between the observation position and the center of the screen exceeds 30°, the root mean square (RMS) radius of the spot diagrams increases significantly. To ensure high 3D imaging quality, the viewing range of the demonstrated system is set to 60° in front of the screen, which is much larger than that of the traditional II display system. With the proposed method, dynamic 3D scenes can also be achieved, as shown in Visualization 2 in Fig. 10.

Fig. 9 Spot diagrams of the microlens array: viewing angles from the center of the screen are (a) 0°, (b) 10°, (c) 20°, (d) 30°, (e) 40°. RMS radius of the spot diagrams: (a) 1.304 μm, (b) 8.009 μm, (c) 32.435 μm, (d) 87.830 μm, (e) 247.43 μm.

Fig. 10 Photographs of a dynamic 3D scene (see Visualization 2)

4. Conclusion

A large viewing angle three-dimensional display with smooth motion parallax and accurate depth cues is successfully demonstrated. The proposed 3D display method eliminates the flipping images of the traditional II system and effectively enhances the viewing angle. In the image coding process, elemental images and structure images are utilized to calculate the subpixel distribution on the LCD panel. The detailed depth map is used to construct the 3D structure, and the elemental image is utilized as the texture covering the surface of the designed shape. The repeated period of the elemental images determines the displayed depth cues. Multiple groups of images can be used to construct complex 3D content. In our experiment, a 3D scene containing a football in a green net is constructed, and a high quality 3D display with smooth motion parallax is realized over a 60° viewing angle, which is much larger than that of the traditional II system.

Acknowledgments

This work is partly supported by the “863” Program (2015AA015902), the National Science Foundation of China (61177018), the Program of Beijing Science and Technology Plan (D121100004812001) and the State Key Laboratory of Information Photonics and Optical Communications.

References and links

1. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87 (2011). [CrossRef] [PubMed]

2. Y.-C. Chang, L.-C. Tang, and C.-Y. Yin, “Efficient simulation of intensity profile of light through subpixel-matched lenticular lens array for two- and four-view auto-stereoscopic liquid-crystal display,” Appl. Opt. 52(1), A356–A359 (2013). [CrossRef]   [PubMed]  

3. Y. Maeda, D. Miyazaki, T. Mukai, and S. Maekawa, “Volumetric display using rotating prism sheets arranged in a symmetrical configuration,” Opt. Express 21(22), 27074–27086 (2013). [CrossRef]   [PubMed]  

4. E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014). [CrossRef]   [PubMed]  

5. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications [Invited],” Appl. Opt. 52(4), 546–560 (2013). [CrossRef]   [PubMed]  

6. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27(5), 324–326 (2002). [CrossRef]   [PubMed]  

7. A. Schwarz, J. Wang, A. Shemer, Z. Zalevsky, and B. Javidi, “Lensless three-dimensional integral imaging using variable and time multiplexed pinhole array,” Opt. Lett. 40(8), 1814–1817 (2015). [CrossRef]   [PubMed]  

8. Y. Oh, D. Shin, B.-G. Lee, S.-I. I. Jeong, and H.-J. Choi, “Resolution-enhanced integral imaging in focal mode with a time-multiplexed electrical mask array,” Opt. Express 22(15), 17620–17629 (2014). [CrossRef]   [PubMed]  

9. A. Castro, Y. Frauel, and B. Javidi, “Integral imaging with large depth of field using an asymmetric phase mask,” Opt. Express 15(16), 10266–10273 (2007). [CrossRef]   [PubMed]  

10. H. Liao, T. Dohi, and K. Nomura, “Autostereoscopic 3D display with long visualization depth using referential viewing area based integral photography,” IEEE Trans. Vis. Comput. Graph. 17(11), 1690–1701 (2010). [CrossRef]   [PubMed]  

11. X. Shen, Y.-J. Wang, H.-S. Chen, X. Xiao, Y. H. Lin, and B. Javidi, “Extended depth-of-focus 3D micro integral imaging display using a bifocal liquid crystal lens,” Opt. Lett. 40(4), 538–541 (2015). [CrossRef]   [PubMed]  

12. H.-S. Kim, K.-M. Jeong, S.-I. Hong, N.-Y. Jo, and J.-H. Park, “Analysis of image distortion based on light ray field by multi-view and horizontal parallax only integral imaging display,” Opt. Express 20(21), 23755–23768 (2012). [CrossRef]   [PubMed]  

13. B. Lee, S. Jung, and J.-H. Park, “Viewing-angle-enhanced integral imaging by lens switching,” Opt. Lett. 27(10), 818–820 (2002). [CrossRef]   [PubMed]  

14. Z.-L. Xiong, Q.-H. Wang, S.-L. Li, H. Deng, and C.-C. Ji, “Partially-overlapped viewing zone based integral imaging system with super wide viewing angle,” Opt. Express 22(19), 22268–22277 (2014). [CrossRef]   [PubMed]  

15. R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martínez-Corral, “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Opt. Express 15(24), 16255–16260 (2007). [CrossRef]   [PubMed]  

16. J. S. Jang and B. Javidi, “Improvement of viewing angle in integral imaging by use of moving lenslet arrays with low fill factor,” Appl. Opt. 42(11), 1996–2002 (2003). [CrossRef]   [PubMed]  

17. Y. Kim, J. H. Park, S. W. Min, S. Jung, H. Choi, and B. Lee, “Wide-viewing-angle integral three-dimensional imaging system by curving a screen and a lens array,” Appl. Opt. 44(4), 546–552 (2005). [CrossRef]   [PubMed]  

18. G. Park, J.-H. Jung, K. Hong, Y. Kim, Y.-H. Kim, S.-W. Min, and B. Lee, “Multi-viewer tracking integral imaging system and its viewing zone analysis,” Opt. Express 17(20), 17895–17908 (2009). [CrossRef]   [PubMed]  

19. F. Okano, H. Hoshino, J. Arai, and I. Yuma, “Three-dimensional video system based on integral photography,” Opt. Eng. 38(6), 1072–1077 (1999). [CrossRef]  

20. X. Yu, X. Sang, S. Xing, T. Zhao, D. Chen, Y. Cai, B. Yan, K. Wang, J. Yuan, C. Yu, and W. Dou, “Natural three-dimensional display with smooth motion parallax using active partially pixelated masks,” Opt. Commun. 313, 146–151 (2014). [CrossRef]  

21. S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The Lumigraph,” in SIGGRAPH '96 Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM, 1996), pp. 43–54.

[Crossref]

Yu, X.

X. Yu, X. Sang, S. Xing, T. Zhao, D. Chen, Y. Cai, B. Yan, K. Wang, J. Yuan, C. Yu, and W. Dou, “Natural three-dimensional display with smooth motion parallax using active partially pixelated masks,” Opt. Commun. 313, 146–151 (2014).
[Crossref]

Yuan, J.

X. Yu, X. Sang, S. Xing, T. Zhao, D. Chen, Y. Cai, B. Yan, K. Wang, J. Yuan, C. Yu, and W. Dou, “Natural three-dimensional display with smooth motion parallax using active partially pixelated masks,” Opt. Commun. 313, 146–151 (2014).
[Crossref]

Yuma, I.

F. Okano, H. Hoshino, J. Arai, and I. Yuma, “Three-dimensional video system based on integral photography,” Opt. Eng. 38(6), 1072–1077 (1999).
[Crossref]

Zalevsky, Z.

Zhao, T.

X. Yu, X. Sang, S. Xing, T. Zhao, D. Chen, Y. Cai, B. Yan, K. Wang, J. Yuan, C. Yu, and W. Dou, “Natural three-dimensional display with smooth motion parallax using active partially pixelated masks,” Opt. Commun. 313, 146–151 (2014).
[Crossref]

Appl. Opt. (5)

IEEE Trans. Vis. Comput. Graph. (1)

H. Liao, T. Dohi, and K. Nomura, “Autostereoscopic 3D display with long visualization depth using referential viewing area based integral photography,” IEEE Trans. Vis. Comput. Graph. 17(11), 1690–1701 (2010).
[Crossref] [PubMed]

Opt. Commun. (1)

X. Yu, X. Sang, S. Xing, T. Zhao, D. Chen, Y. Cai, B. Yan, K. Wang, J. Yuan, C. Yu, and W. Dou, “Natural three-dimensional display with smooth motion parallax using active partially pixelated masks,” Opt. Commun. 313, 146–151 (2014).
[Crossref]

Opt. Eng. (1)

F. Okano, H. Hoshino, J. Arai, and I. Yuma, “Three-dimensional video system based on integral photography,” Opt. Eng. 38(6), 1072–1077 (1999).
[Crossref]

Opt. Express (8)

H.-S. Kim, K.-M. Jeong, S.-I. Hong, N.-Y. Jo, and J.-H. Park, “Analysis of image distortion based on light ray field by multi-view and horizontal parallax only integral imaging display,” Opt. Express 20(21), 23755–23768 (2012).
[Crossref] [PubMed]

G. Park, J.-H. Jung, K. Hong, Y. Kim, Y.-H. Kim, S.-W. Min, and B. Lee, “Multi-viewer tracking integral imaging system and its viewing zone analysis,” Opt. Express 17(20), 17895–17908 (2009).
[Crossref] [PubMed]

Z.-L. Xiong, Q.-H. Wang, S.-L. Li, H. Deng, and C.-C. Ji, “Partially-overlapped viewing zone based integral imaging system with super wide viewing angle,” Opt. Express 22(19), 22268–22277 (2014).
[Crossref] [PubMed]

R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martínez-Corral, “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Opt. Express 15(24), 16255–16260 (2007).
[Crossref] [PubMed]

Y. Oh, D. Shin, B.-G. Lee, S.-I. I. Jeong, and H.-J. Choi, “Resolution-enhanced integral imaging in focal mode with a time-multiplexed electrical mask array,” Opt. Express 22(15), 17620–17629 (2014).
[Crossref] [PubMed]

A. Castro, Y. Frauel, and B. Javidi, “Integral imaging with large depth of field using an asymmetric phase mask,” Opt. Express 15(16), 10266–10273 (2007).
[Crossref] [PubMed]

Y. Maeda, D. Miyazaki, T. Mukai, and S. Maekawa, “Volumetric display using rotating prism sheets arranged in a symmetrical configuration,” Opt. Express 21(22), 27074–27086 (2013).
[Crossref] [PubMed]

E. Moon, M. Kim, J. Roh, H. Kim, and J. Hahn, “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014).
[Crossref] [PubMed]

Opt. Lett. (4)

Other (1)

S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The Lumigraph,” in SIGGRAPH '96 Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM, 1996), pp. 43–54.

Supplementary Material (2)

Visualization 1: MP4 (2018 KB). Video of the 3D scene.
Visualization 2: MP4 (1568 KB). Video of the dynamic 3D scene.

Figures (10)

Fig. 1 Schematic diagram of the large viewing angle 3D images. (a) Formation of the floating 3D image. (b) Formation of the deep image.
Fig. 2 Coding process of two-layer display content with different depths.
Fig. 3 Pictures taken at different positions of the 3D display with digitally generated tomographic images.
Fig. 4 Formation principle of the proposed 3D display with a complex depth structure.
Fig. 5 Structure images and their corresponding icon images.
Fig. 6 Coding process of the designed 3D scene.
Fig. 7 Photographs taken at different viewing angles (see Visualization 1).
Fig. 8 Light field (EPI) produced by the designed 3D image.
Fig. 9 Spot diagrams of the microlens array at viewing angles of (a) 0°, (b) 10°, (c) 20°, (d) 30°, and (e) 40° from the center of the screen. RMS radii of the spot diagrams: (a) 1.304 μm, (b) 8.009 μm, (c) 32.435 μm, (d) 87.830 μm, (e) 247.43 μm.
Fig. 10 Photographs of a dynamic 3D scene (see Visualization 2).

Equations (9)


(1) \( \theta = 2\arctan\left(\dfrac{T_0}{2g}\right) \)

(2) \( T = \dfrac{d_{out} + g}{d_{out}}\,T_0 \)

(3) \( T = \dfrac{d_{in} - g}{d_{in}}\,T_0 \)

(4) \( F(x, y; d_{out}) = f\left(1 - \dfrac{x - [x/T]_{int}\,T}{T},\; 1 - \dfrac{y - [y/T]_{int}\,T}{T}\right), \quad \text{where } T = \dfrac{d_{out} + g}{d_{out}}\,T_0 \)

(5) \( F(x, y; d_{in}) = f\left(\dfrac{x - [x/T]_{int}\,T}{T},\; \dfrac{y - [y/T]_{int}\,T}{T}\right), \quad \text{where } T = \dfrac{d_{in} - g}{d_{in}}\,T_0 \)

(6) \( F_{total}(x, y) = F_1(x, y; d_1) + F_2(x, y; d_2) \)

(7) \( F_{structure}(x, y) = f\left(1 - \dfrac{x - [x/T]_{int}\,T}{T},\; 1 - \dfrac{y - [y/T]_{int}\,T}{T}\right), \quad \text{where } T = \dfrac{d(x, y) + g}{d(x, y)}\,T_0 \)

(8) \( F_{structure}(x, y) = f\left(\dfrac{x - [x/T]_{int}\,T}{T},\; \dfrac{y - [y/T]_{int}\,T}{T}\right), \quad \text{where } T = \dfrac{d(x, y) - g}{d(x, y)}\,T_0 \)

(9) \( F_{total}(x, y) = F_{structure1}(x, y) + F_{structure2}(x, y) \)
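The coding rule above can be sketched numerically. The following NumPy function is a minimal illustration, not the authors' implementation: the function name, the default panel size, and the nearest-neighbor sampling of the elemental image are assumptions. It tiles an elemental image f with the depth-dependent period T, inverting the coordinates for a floating image (in front of the screen) and keeping them upright for a deep image (behind the screen); summing two such coded images yields the two-layer content.

```python
import numpy as np

def coded_image(elemental, T0, g, depth, floating=True, size=(2160, 3840)):
    """Code a panel image by tiling `elemental` with the depth-dependent period T.

    Following the period formulas above (symbols as in the text):
    floating image at distance d_out:  T = (d_out + g) / d_out * T0, inverted sampling;
    deep image at distance d_in:       T = (d_in - g) / d_in * T0, upright sampling.
    Nearest-neighbor lookup into `elemental` is an illustrative choice.
    """
    h, w = size
    y, x = np.mgrid[0:h, 0:w].astype(float)
    if floating:
        T = (depth + g) / depth * T0
        u = 1.0 - (x - np.floor(x / T) * T) / T   # inverted elemental image
        v = 1.0 - (y - np.floor(y / T) * T) / T
    else:
        T = (depth - g) / depth * T0
        u = (x - np.floor(x / T) * T) / T          # upright elemental image
        v = (y - np.floor(y / T) * T) / T
    eh, ew = elemental.shape[:2]
    # u, v are normalized coordinates in [0, 1]; sample f(u, v) by nearest neighbor
    iu = np.clip((u * ew).astype(int), 0, ew - 1)
    iv = np.clip((v * eh).astype(int), 0, eh - 1)
    return elemental[iv, iu]
```

A two-layer scene would then be composed by adding the coded images of the two layers, e.g. `coded_image(f1, T0, g, d1) + coded_image(f2, T0, g, d2, floating=False)`, mirroring the superposition of the two structure images in the text.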
