Abstract

In this paper, we analyze the relationship between the viewer and the viewing zones of an integral imaging (II) system and present a partially-overlapped viewing zone (POVZ) based II system with a super wide viewing angle. In the proposed system, the viewing angle can be wider than that of the conventional tracking based II system, and the POVZ eliminates the flipping and time delay of the 3D scene as well. The proposed II system achieves a super wide viewing angle of 120° without the flipping effect, about twice as wide as that of the conventional system.

© 2014 Optical Society of America

1. Introduction

Integral imaging (II) can reconstruct true 3D images without glasses and provides both horizontal and vertical parallaxes with continuous views [1–4]. However, there are still some problems, such as the limitations of 3D image resolution, depth range, and viewing angle, which delay the practical application of II. In the past decades, many researchers have focused on solving these problems [5–8], and many technologies have been proposed, including computer graphics technology, head and eye tracking technology, and so on.

In this paper, we focus on the viewing angle issue of an II system and use computer-generated integral imaging (CGII) to capture the elemental images of the 3D scene [9–11]. The viewing angle is defined as the scope within which the viewer can observe the 3D images without obvious imperfection (such as flipping). In the conventional II system, viewers can watch 3D images only in a very narrow region, and flipping images are observed within a slightly larger angle. Many researchers have focused on modifying the optical structures of the II system [12–18]. The viewing angle is indeed enhanced by using a curved lens array [12], two elemental image masks [13], lens switching [14], and so on, but some of these structures are not practical because the specific devices are difficult to fabricate. With the development of tracking technology, some researchers use head or eye tracking to enhance the viewing angle, and remarkable performances have been obtained [19–22]. The viewer’s position is obtained by head or eye tracking, and the tracking results are used for generating elemental images at proper positions. A great contribution has been made by Park et al. to enhance the viewing angle of the II system based on head tracking [6, 7]. In their methods, the viewer is always located at the central position of the viewing zone, so the viewing angle is limited by the maximum tracking angle of the tracking device [7], and flipping and time delay of the reconstructed 3D scene may occur when the viewer moves fast.

In this paper, we propose a partially-overlapped viewing zone (POVZ) based II system with a super wide viewing angle, which consists of a conventional II system and a tracking device. In the POVZ II system, the viewing zones are rearranged within 120° to eliminate the flipping 3D images in the crosstalk zones of the conventional II system. Besides, the flipping and time delay issues in the conventional tracking based II system are also eliminated. In the POVZ II system, a tracking device obtains the viewer’s 3D position, and the system generates a corresponding adaptive elemental image array (AEIA) which reconstructs the 3D scene based on the tracking information in real time. We then introduce the generation method for the AEIA of the POVZ based on the viewer’s position. In the experiment, we build a POVZ II system with a super wide viewing angle of 120°, which corresponds to the region free of obvious imperfections.

2. Principle of the proposed POVZ II system

As shown in Fig. 1, the architecture of the proposed system is composed of four parts: the input and tracking part handles the parameter and 3D scene data input and the tracking of the viewer’s position; the calculation part obtains the POVZ and virtual camera array information; the pixel mapping process generates the AEIA based on the parallax images; and the display process displays the AEIA for the viewer.

Fig. 1 Architecture of the proposed POVZ based II system.

In this paper, Section 2.1 explains the difference in principle between the proposed and conventional systems and determines the region of the POVZ. Section 2.2 analyzes the relationship between the viewing zones and the AEIAs and calculates the shift of the AEIA. Section 2.3 proposes the generation method for the AEIAs.

2.1 Comparison of viewing zones in conventional tracking based II system and POVZ II system

In the conventional tracking based II system, as shown in Fig. 2(a), the EIA is updated according to the tracking result in real time to ensure that the viewer is always located at the central position of the viewing zone [6]. So the viewing angle of the conventional tracking based II system θvc is limited by the tracking device’s largest tracking angle θtr and the viewing angle of the conventional II system θ0:

\[
\theta_{vc} \le \theta_{tr} + \theta_{0}. \tag{1}
\]
Most of the conventional tracking based II systems use the tracked viewer’s position to change the EIA in real time [7]. So when the viewer moves out of the tracking region, if the system does not record the viewer’s last position information, it cannot display the EIA exactly; in this case, the viewing angle θvc is no more than the largest tracking angle θtr [6]. Moreover, because of the limited response time and accuracy of the tracking device, time delay will degrade the viewing effect when the viewer moves fast.

Fig. 2 Comparison of viewing zones between (a) the conventional tracking based II system and (b) the proposed POVZ II system.

In our tracking based II system, by using the POVZ, the viewing angle θv can be wider than the viewing angle of the conventional tracking based II system θvc. As shown in Fig. 2(b), when the viewer moves out of the tracking region, the proposed system optimizes the viewing zone according to the last tracked information, and almost the entire viewing zone is arranged outside the tracking region, so the viewer can watch the 3D images over a wider angle without tracking.

As shown in Fig. 2(b), the viewing space is divided into several viewing zones in the horizontal and vertical directions, coded as Vi, j. V0, 0 is the viewing zone of the conventional II system and is regarded as the original viewing zone in our II system. The boundary of Vi, j can be determined from V0, 0 with a certain shift. The adjacent zones Vi, j and Vi-1, j have a partially-overlapped part denoted as Pi, j_i-1, j, as shown in Fig. 2(b). Pi, j_i-1, j reconstructs the same 3D scene in the overlapped zone of Vi, j and Vi-1, j. The proportion of Pi, j_i-1, j in Vi, j is denoted as the overlapped coefficient of viewing zone Vi, j, which determines the size of the POVZ. The overlapped coefficient is a variable that decreases gradually from the center of the viewing space to the edge. The overlapped coefficient of V0, 0 is the largest and is denoted as the initial overlapped coefficient t.

In the proposed POVZ system, each viewing zone has a corresponding AEIA, and the AEIA of Vi, j is denoted as Ai, j. The adjacent viewing zones are separated by the angular bisector of the angle range of Pi, j_i-1, j (dotted red lines in Fig. 2(b)), which serves as a trigger line to send a signal to update the AEIA when the viewer moves from one viewing zone to the adjacent one.
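The trigger-line rule above can be sketched in Python. This is an illustrative sketch, not the authors' implementation: the angular pitch between adjacent zone centers and the maximum zone index are hypothetical parameters. Because the trigger line between Vi and Vi+1 is the bisector of their overlapped part, it lies at (i + 0.5) × pitch, so the active zone is simply the one whose center is nearest to the viewer's angle.

```python
def zone_from_angle(angle_deg, pitch_deg):
    """Active viewing-zone index for a viewer at angle_deg.

    The trigger line between V_i and V_{i+1} lies at (i + 0.5) * pitch,
    the bisector of their overlapped part, so the active zone is the
    one whose center is nearest to the viewer's angle.
    """
    return round(angle_deg / pitch_deg)


def select_aeia_index(angle_deg, pitch_deg, max_zone):
    """Clamp the index: outside the maximum tracking angle the last
    (most marginal) AEIA is kept, so viewing continues without tracking."""
    return max(-max_zone, min(max_zone, zone_from_angle(angle_deg, pitch_deg)))
```

Since the two adjacent AEIAs render the same scene throughout the overlapped part, switching exactly at the bisector produces no visible discontinuity even for a fast-moving viewer.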

The tracking device detects the viewer’s position in the viewing space in real time. When the viewer moves into Vi, j and crosses the trigger line in Pi, j_i-1, j, the AEIA is changed from Ai-1, j to Ai, j. When the viewer moves out of the maximum tracking angle, the AEIA remains unchanged. Because almost the entire region of Vi, j (rather than only half of it) is arranged outside the tracking region, the viewing angle θv in the proposed system is wider than θvc in the conventional tracking based II system. The viewing angle θv can be calculated by

\[
\theta_{v} = \theta_{tr} + 2\theta_{0} - \varepsilon_{min}, \tag{2}
\]
where εmin is the angle range of the most marginal overlapped viewing zone Pi, j_i-1, j. In our system, εmin is a small value, and θ0 is the viewing angle of V0, 0 in the POVZ II system. Owing to the finite width of the overlapped viewing zone, even if the viewer moves rapidly from one viewing zone to the adjacent one, the display system has time to change the AEIA, and the viewer observes a continuous 3D scene without any flipping or time delay.
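Equations (1) and (2) can be checked numerically. The sketch below is illustrative only: θtr = 57° is the Kinect tracking angle and θ0 = 35° is the measured conventional viewing angle from Section 3, while εmin = 7° is a hypothetical value chosen to show that the formula reproduces the reported 120°.

```python
def conventional_viewing_angle(theta_tr, theta_0):
    """Upper bound of Eq. (1): the viewer is kept at the zone center,
    so the angle is limited by the tracking range plus one viewing zone."""
    return theta_tr + theta_0


def povz_viewing_angle(theta_tr, theta_0, eps_min):
    """Eq. (2): the marginal zones are pushed almost entirely outside
    the tracking region, adding nearly one zone width on each side."""
    return theta_tr + 2 * theta_0 - eps_min


print(conventional_viewing_angle(57, 35))  # 92
print(povz_viewing_angle(57, 35, 7))       # 120
```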

2.2 Relationship between the viewing zones and the AEIAs in the POVZ II system

In the proposed system, Vi, j can be determined from the corresponding Ai, j, and Ai, j has corresponding content updates and a pixel shift Δni, j compared with A0, 0. In the POVZ II system, the AEIA is changed only when the viewer arrives at a trigger line.

As shown in Fig. 3, we assume that the origin point O is at the center of the lens array, and the tracking device obtains the viewer’s 3D position as P(x, y, z) in real time. So the tracked viewer’s angle θ in the viewing space can be deduced as:

\[
\theta = \left( \arctan\frac{x}{z},\ \arctan\frac{y}{z} \right). \tag{3}
\]
Assume that each elemental image has u × v pixels and its size is rh × rv. According to the viewer’s position, the viewing zone Vi, j in which the viewer is located can be obtained by the following equations:
\[
i =
\begin{cases}
\operatorname{round}\!\left( \dfrac{g}{(1-t_h)\,r_h} \cdot \dfrac{x}{z} \right), & |x| < x_{max} = z\tan\!\left(\dfrac{\theta_{tr}}{2}\right), \\[2mm]
\operatorname{round}\!\left( \dfrac{g}{(1-t_h)\,r_h} \tan\!\left(\dfrac{\theta_{tr}}{2}\right) \right), & |x| \ge x_{max},
\end{cases} \tag{4}
\]
\[
j =
\begin{cases}
\operatorname{round}\!\left( \dfrac{g}{(1-t_v)\,r_v} \cdot \dfrac{y}{z} \right), & |y| < y_{max} = z\tan\!\left(\dfrac{\theta_{tr}}{2}\right), \\[2mm]
\operatorname{round}\!\left( \dfrac{g}{(1-t_v)\,r_v} \tan\!\left(\dfrac{\theta_{tr}}{2}\right) \right), & |y| \ge y_{max},
\end{cases} \tag{5}
\]
where g is the gap between the AEIA and the lens array, and th and tv are the initial overlapped coefficients of A0, 0 and A-1, 0 in the horizontal and vertical directions, respectively. th and tv can be expressed as the proportion of the overlapped pixels in the elemental images in A0, 0 and A-1, 0, as shown in Fig. 3. They are both within the range of (0, 1). The maximum tracking region is (-xmax, xmax) and (-ymax, ymax) at the viewing distance z.
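The zone-index computation of Eqs. (3)–(5) along one axis can be sketched as follows. This is an illustrative Python sketch: the values of g and r used in the usage note are hypothetical (chosen so the marginal zone index is 3, as in the experiment), and keeping the sign of x in the saturated branch is our assumption.

```python
import math


def viewer_angle(x, y, z):
    """Eq. (3): horizontal and vertical viewing angles (radians) for the
    tracked position P(x, y, z), with the origin at the lens-array center."""
    return math.atan2(x, z), math.atan2(y, z)


def zone_index(x, z, g, t, r, theta_tr):
    """Eq. (4); Eq. (5) is identical along the vertical axis.

    g: gap between the AEIA and the lens array, t: initial overlapped
    coefficient, r: elemental-image size along this axis,
    theta_tr: maximum tracking angle (radians).
    """
    scale = g / ((1 - t) * r)
    x_max = z * math.tan(theta_tr / 2)
    if abs(x) < x_max:
        return round(scale * x / z)
    # Outside the tracking region the index saturates at the most
    # marginal zone; keeping the sign of x here is our assumption.
    return round(scale * math.tan(theta_tr / 2)) * (1 if x >= 0 else -1)
```

With t = 9/13 as in the experiment and hypothetical g = 0.017 m, r = 0.01 m, a viewer at P2(3.7 m, 0.1 m, 2.4 m) lies outside the tracking region and maps to the marginal index i = 3, consistent with Fig. 7.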

Fig. 3 Relationship between the viewing zones and the AEIAs in the POVZ II system.

In our system, Ai, j reconstructs the 3D images within the viewing zone Vi, j. In order to ensure that almost the entire viewing zone lies outside the tracking region when the viewer moves to the tracking boundary, each viewing zone Vi, j has a precisely defined shift. The movement of Vi, j is determined by the pixel shift Δni, j of Ai, j, which includes a conventional shift (Δni, j)c and an additional shift (Δni, j)a. The former is the same as the pixel shift in the conventional tracking based II system, which keeps the viewer at the center of the viewing zone; the latter places the viewer at an off-center position in the corresponding viewing zone. As shown in Fig. 3, the viewer is located at P(x, y, z) and the corresponding Ai, j has a pixel shift Δni, j. The conventional shift (Δni, j)c moves the viewing zone to the viewer’s position, and the additional shift (Δni, j)a gives the viewing zone an additional movement which contributes to the wider viewing angle. Combining the conventional and additional shifts, Δni, j in the horizontal and vertical directions is given by

\[
\Delta n_{i,j} = (\Delta n_{i,j})_c + (\Delta n_{i,j})_a
= \left( (\Delta\mu_{i,j})_c + (\Delta\mu_{i,j})_a,\ (\Delta\lambda_{i,j})_c + (\Delta\lambda_{i,j})_a \right), \tag{6}
\]
where (Δμi, j)c, (Δμi, j)a, (Δλi, j)c and (Δλi, j)a are the conventional shift and additional shift in the horizontal and vertical directions, respectively, and can be deduced as:
\[
(\Delta\mu_{i,j})_c = i\,(1-t_h), \tag{7}
\]
\[
(\Delta\mu_{i,j})_a = \operatorname{round}\!\left( \frac{u\,r_h\,(1-t_h)}{2g\tan(\theta_{tr}/2)}\, i \right), \tag{8}
\]
\[
(\Delta\lambda_{i,j})_c = j\,(1-t_v), \tag{9}
\]
\[
(\Delta\lambda_{i,j})_a = \operatorname{round}\!\left( \frac{v\,r_v\,(1-t_v)}{2g\tan(\theta_{tr}/2)}\, j \right), \tag{10}
\]
where (i, j) is determined by the tracked viewer’s position and the parameters of the II system. With the pixel shift Δni, j, the region of Vi, j can be determined. Equations (4)–(10) give the relationship between the viewing zones and the corresponding AEIAs.
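The shift computation of Eqs. (6)–(10) can be sketched as below. This is an illustrative reconstruction: the operator placement inside Eqs. (7)–(10) follows our reading of the paper's notation, and all parameter values in the test case are hypothetical.

```python
import math


def pixel_shift(i, j, u, v, rh, rv, th, tv, g, theta_tr):
    """Total pixel shift of A_{i,j} relative to A_{0,0}, Eq. (6),
    split into conventional (Eqs. (7), (9)) and additional
    (Eqs. (8), (10)) parts. theta_tr is in radians."""
    k = 2 * g * math.tan(theta_tr / 2)
    mu_c = i * (1 - th)                      # Eq. (7): conventional, horizontal
    mu_a = round(u * rh * (1 - th) / k * i)  # Eq. (8): additional, horizontal
    la_c = j * (1 - tv)                      # Eq. (9): conventional, vertical
    la_a = round(v * rv * (1 - tv) / k * j)  # Eq. (10): additional, vertical
    return (mu_c + mu_a, la_c + la_a)        # Eq. (6)
```

For the central zone (i = j = 0) the shift is zero, and the horizontal shift grows with the zone index, which is what pushes the marginal zones outside the tracking region.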

2.3 Generation method for AEIA of the proposed POVZ II system

In the proposed POVZ II system, we improve the viewpoint vector rendering (VVR) method [23, 24] to obtain the AEIAs efficiently. As shown in Fig. 4, after arranging the 3D scene in advance, we set a virtual camera array to pick up the 3D information, and each camera has an orthographic geometry. The number of virtual cameras is equal to the number of pixels in each elemental image of the AEIA. We suppose that the number of micro-lenses in the lens array is M × N and the size of each elemental image is u × v pixels. As shown in Fig. 4, in the horizontal direction, u virtual cameras are needed to obtain the u orthographic projection images. Then the orthographic projection images are interleaved to generate A0, 0 based on the VVR method.

Fig. 4 Generation process for AEIAs of the proposed POVZ II system.

We can obtain Ai, j for Vi, j as shown in Fig. 4. The virtual camera array for Ai, j has a specific shift ΔDi, j compared with that for A0, 0, but both of them have the same convergent point Pcon. The shift ΔDi, j of the virtual camera array also includes horizontal and vertical components in order to pick up the wider-angle parallax images. The shift ΔDi, j can be determined from the pixel shift Δni, j of Ai, j:

\[
\Delta D_{i,j} = \left( \left[ (\Delta\mu_{i,j})_c + (\Delta\mu_{i,j})_a \right] d,\ \left[ (\Delta\lambda_{i,j})_c + (\Delta\lambda_{i,j})_a \right] d \right), \tag{11}
\]
where d is the distance between adjacent cameras in the virtual camera array in both the horizontal and vertical directions.
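Equation (11) converts the pixel shift of the AEIA into a shift of the virtual camera array. A minimal sketch follows; the pairing of the experimental numbers (a total horizontal pixel shift of 18 with d = 5.9 mm) is our assumption based on the (18 × 5.9) mm shift reported in Section 3.

```python
def camera_array_shift(mu_c, mu_a, la_c, la_a, d):
    """Eq. (11): shift of the virtual camera array for A_{i,j};
    d is the spacing between adjacent virtual cameras (same in both axes)."""
    return ((mu_c + mu_a) * d, (la_c + la_a) * d)


# Illustrative: a total horizontal pixel shift of 18 with d = 5.9 mm
# gives the (18 x 5.9) mm horizontal shift reported in the experiment.
dx, dy = camera_array_shift(0, 18, 0, 0, 5.9)
```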

We obtain the u × v orthographic projection images for Ai, j. In the orthographic projection image of the m′-th column and the n′-th row, the pixel in the m-th column and the n-th row is denoted as I(m, n)m′, n′. The pixel I(m, n)m′, n′ is mapped to the pixel in the p-th column and the q-th row of Ai, j, which is denoted as Ii, j(p, q), as shown in Fig. 4. Thus, we obtain Eq. (12):

\[
I_{i,j}(p, q) = I(m, n)_{m', n'}. \tag{12}
\]
The relationship between p, q, m, n, m′, and n′ in Eq. (12) is given by
\[
p = (m+1)\,u - m' - 1 + (\Delta\mu_{i,j})_c + (\Delta\mu_{i,j})_a, \tag{13}
\]
\[
q = (n+1)\,v - n' - 1 + (\Delta\lambda_{i,j})_c + (\Delta\lambda_{i,j})_a. \tag{14}
\]
In this way, by looping m′ from 0 to u−1, n′ from 0 to v−1, m from 0 to M−1, and n from 0 to N−1, all the pixels in the orthographic projection images are mapped to Ai, j. Moreover, the proposed pixel mapping is processed in parallel on the GPU, using the compute unified device architecture (CUDA) as the development environment, to achieve real-time display. This section has described the acquisition and pixel mapping steps in the generation process of the AEIAs.
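The mapping loop above can be sketched in plain Python (the CUDA version parallelizes the same mapping over pixels). The bounds check that discards pixels shifted off the display is our assumption; the shifts are taken as already-rounded integers.

```python
def generate_aeia(proj, u, v, M, N, d_mu, d_la):
    """Interleave the u x v orthographic projection images into the AEIA.

    proj[mp][np_][m][n] is pixel (m, n) of the projection image from
    camera (mp, np_); d_mu, d_la are the integer pixel shifts
    (conventional + additional) appearing in Eqs. (13) and (14).
    """
    W, H = M * u, N * v
    aeia = [[0] * H for _ in range(W)]
    for mp in range(u):                # m' : camera column
        for np_ in range(v):           # n' : camera row
            for m in range(M):         # lens column
                for n in range(N):     # lens row
                    p = (m + 1) * u - mp - 1 + d_mu   # Eq. (13)
                    q = (n + 1) * v - np_ - 1 + d_la  # Eq. (14)
                    if 0 <= p < W and 0 <= q < H:     # discard off-screen pixels
                        aeia[p][q] = proj[mp][np_][m][n]  # Eq. (12)
    return aeia
```

With zero shift this reduces to the standard VVR interleaving, in which each camera contributes one pixel to every elemental image.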

3. Experiments and results

In our experiment, we use a Kinect® as the tracking device to obtain the 3D position of the viewer’s head [25]. The experimental setup is shown in Fig. 5.

Fig. 5 Experimental setup of the II system with super wide viewing angle.

The proposed II system is configured with the specifications in Table 1. We use a pinhole array instead of a lens array. Each elemental image contains 13 × 13 pixels, which are covered by one elemental pinhole. So we build 13 × 13 virtual cameras to pick up the AEIAs.

Table 1. Configuration parameters and experiment environment of the proposed II system

In our experiment, we build a “man head” as the 3D scene, and the central depth plane is located at the center of the head, as shown in Fig. 6. The viewing angle of the conventional tracking based II display is 57° (±28.5°), which is equal to the maximum tracking angle of the Kinect when the viewing distance is about 3.1 m.

Fig. 6 3D scene built in experiments.

When the viewer’s position is tracked, the AEIAs are obtained; two of them are shown in Figs. 7(a) and 7(b). A0, 0 is obtained when the viewer’s position is P1(0.0 m, 0.1 m, 3.1 m) with a viewing angle of 0° in the viewing space, and A3, 0 is captured when the viewer moves to P2(3.7 m, 0.1 m, 2.4 m) with a viewing angle of about 55°. Simultaneously, the virtual camera array has a shift of (18 × 5.9) mm in the horizontal direction according to Eq. (11). The viewing zones are shown in Figs. 7(c) and 7(d), and we can see that the system displays A3, 0 as the most marginal AEIA when the viewing angle exceeds 27.5°.

Fig. 7 AEIAs and viewing zones: (a) A0, 0 and the corresponding elemental images, (b) A3, 0 and the corresponding elemental images, (c) the region of V0, 0, (d) the region of V3, 0.

In our experiment, each elemental image in A-1, 0 has 9 overlapped pixels with the corresponding elemental image in A0, 0, so the initial overlapped coefficient t is 9/13 in the horizontal direction. The viewing space is divided into seven viewing zones in the horizontal direction, with seven corresponding AEIAs. The region of Vi, j, the pixel shift Δni, j of Ai, j, the corresponding shift of the virtual camera array, and the trigger angles of the adjacent viewing zones are listed in detail in Table 2.

Table 2. Region of POVZ and the trigger angle in experiment

When the viewer moves in front of the II display, images from different positions are captured, as shown in Fig. 8. By practical measurement, the maximum viewing angle without the flipping effect in the conventional II system is only 35°, as shown in Figs. 8(a)–8(c), but in the proposed POVZ II system it is 120° (±60°), as shown in Figs. 8(d)–8(h). The super wide viewing angle is achieved by the proposed POVZ. The flipping and time delay of the 3D scene when the viewer moves fast are further reduced by the good response time and accuracy of the tracking device.

Fig. 8 Viewing angle of the conventional II system (Media 1): (a) leftmost view, (b) middle view, (c) rightmost view; viewing angle of the POVZ II system (Media 2): (d) leftmost view, (f) middle view, (h) rightmost view, (e) and (g) comparison with the conventional II viewing angle.

4. Conclusion

An II system based on the POVZ is proposed to enhance the viewing angle effectively without flipping or time delay, even if the viewer moves quickly. The POVZs allot the viewing space according to the relationship between the AEIAs and the viewing zones, and the generation method for the AEIAs of the POVZ is also proposed. In the experiment, the viewing angle of the II system is 120° without the flipping effect, which is more than twice the maximum tracking angle of the Kinect. In addition, the viewing angle of our II system can be extended with a tracking device of better performance. By applying the POVZ to each viewer in a multi-viewer tracking II system, it may be possible to display the 3D images for each viewer with a super wide viewing angle.

Acknowledgments

The work is supported by the “973” Program under Grant No. 2013CB328802, the NSFC under Grant Nos. 61225022 and 61320106015, and the “863” Program under Grant Nos. 2012AA011901 and 2012AA03A301.

References and links

1. G. Lippmann, “La photographie integrale,” C. R. Acad. Sci. 146, 446–451 (1908).

2. J. Hong, Y. Kim, H. J. Choi, J. Hahn, J. H. Park, H. Kim, S. W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues,” Appl. Opt. 50(34), H87–H115 (2011).

3. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications [Invited],” Appl. Opt. 52(4), 546–560 (2013).

4. N. A. Dodgson, “Analysis of the viewing zone of the Cambridge autostereoscopic display,” Appl. Opt. 35(10), 1705–1710 (1996).

5. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997).

6. G. Park, J. Hong, Y. Kim, and B. Lee, “Enhancement of viewing angle and viewing distance in integral imaging by head tracking,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (Optical Society of America, 2009), paper DWB27.

7. G. Park, J. H. Jung, K. Hong, Y. Kim, Y. H. Kim, S. W. Min, and B. Lee, “Multi-viewer tracking integral imaging system and its viewing zone analysis,” Opt. Express 17(20), 17895–17908 (2009).

8. D. C. Hwang, J. S. Park, S. C. Kim, D. H. Shin, and E. S. Kim, “Magnification of 3D reconstructed images in integral imaging using an intermediate-view reconstruction technique,” Appl. Opt. 45(19), 4631–4637 (2006).

9. C. C. Ji, H. Deng, and Q. H. Wang, “Pixel extraction based integral imaging with controllable viewing direction,” J. Opt. 14(9), 095401 (2012).

10. K. C. Kwon, C. Park, M. U. Erdenebat, J. S. Jeong, J. H. Choi, N. Kim, J. H. Park, Y. T. Lim, and K. H. Yoo, “High speed image space parallel processing for computer-generated integral imaging system,” Opt. Express 20(2), 732–740 (2012).

11. S. H. Jiao, X. G. Wang, M. C. Zhou, W. M. Li, T. Hong, D. Nam, J. H. Lee, E. H. Wu, H. T. Wang, and J. Y. Kim, “Multiple ray cluster rendering for interactive integral imaging system,” Opt. Express 21(8), 10070–10086 (2013).

12. Y. Kim, J. H. Park, H. Choi, S. Jung, S. W. Min, and B. Lee, “Viewing-angle-enhanced integral imaging system using a curved lens array,” Opt. Express 12(3), 421–429 (2004).

13. G. Baasantseren, J. H. Park, K. C. Kwon, and N. Kim, “Viewing angle enhanced integral imaging display using two elemental image masks,” Opt. Express 17(16), 14405–14417 (2009).

14. B. Lee, S. Jung, and J. H. Park, “Viewing-angle-enhanced integral imaging by lens switching,” Opt. Lett. 27(10), 818–820 (2002).

15. S. Jung, J. H. Park, H. Choi, and B. Lee, “Wide-viewing integral three-dimensional imaging by use of orthogonal polarization switching,” Appl. Opt. 42(14), 2513–2520 (2003).

16. H. Choi, J. H. Park, J. Kim, S. W. Cho, and B. Lee, “Wide-viewing-angle 3D/2D convertible display system using two display devices and a lens array,” Opt. Express 13(21), 8424–8432 (2005).

17. R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martínez-Corral, “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Opt. Express 15(24), 16255–16260 (2007).

18. Y. Takaki, K. Tanaka, and J. Nakamura, “Super multi-view display with a lower resolution flat-panel display,” Opt. Express 19(5), 4129–4139 (2011).

19. R. Taherkhani and K. Mohammad, “Designing a high accuracy 3D auto stereoscopic eye tracking display, using a common LCD monitor,” 3D Res. 3(3), 1–7 (2012).

20. C. C. Smyth, “Apparatus for tracking the human eye with a retinal scanning display, and method thereof,” U.S. Patent 6,120,461 (19 Sep. 2000).

21. J. Nakamura, T. Takahashi, and Y. Takaki, “Enlargement of viewing freedom of reduced-view SMV display,” in IS&T/SPIE Electronic Imaging (International Society for Optics and Photonics, 2012).

22. J. C. Yang, C. S. Wu, C. H. Hsiao, R. Y. Tsai, and Y. P. Hung, “Evaluation of an eye tracking technology for 3D display applications,” in 3DTV Conference (2008), p. 345.

23. K. S. Park, S. W. Min, and Y. Cho, “Viewpoint vector rendering for efficient elemental image generation,” IEICE Trans. Inf. Syst. E90-D(1), 233–241 (2007).

24. K. C. Kwon, C. Park, M. U. Erdenebat, J. S. Jeong, J. H. Choi, N. Kim, J. H. Park, Y. T. Lim, and K. H. Yoo, “High speed image space parallel processing for computer-generated integral imaging system,” Opt. Express 20(2), 732–740 (2012).

25. Kinect, http://www.kinectfordevelopers.com/. Kinect is a registered trademark of Microsoft Corporation in the United States and/or other countries.

[Crossref] [PubMed]

Y. Kim, J. H. Park, H. Choi, S. Jung, S. W. Min, and B. Lee, “Viewing-angle-enhanced integral imaging system using a curved lens array,” Opt. Express 12(3), 421–429 (2004).
[Crossref] [PubMed]

G. Baasantseren, J. H. Park, K. C. Kwon, and N. Kim, “Viewing angle enhanced integral imaging display using two elemental image masks,” Opt. Express 17(16), 14405–14417 (2009).
[Crossref] [PubMed]

H. Choi, J. H. Park, J. Kim, S. W. Cho, and B. Lee, “Wide-viewing-angle 3D/2D convertible display system using two display devices and a lens array,” Opt. Express 13(21), 8424–8432 (2005).
[Crossref] [PubMed]

R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martínez-Corral, “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Opt. Express 15(24), 16255–16260 (2007).
[Crossref] [PubMed]

Y. Takaki, K. Tanaka, and J. Nakamura, “Super multi-view display with a lower resolution flat-panel display,” Opt. Express 19(5), 4129–4139 (2011).
[Crossref] [PubMed]

K. C. Kwon, C. Park, M. U. Erdenebat, J. S. Jeong, J. H. Choi, N. Kim, J. H. Park, Y. T. Lim, and K. H. Yoo, “High speed image space parallel processing for computer-generated integral imaging system,” Opt. Express 20(2), 732–740 (2012).
[Crossref] [PubMed]

G. Park, J. H. Jung, K. Hong, Y. Kim, Y. H. Kim, S. W. Min, and B. Lee, “Multi-viewer tracking integral imaging system and its viewing zone analysis,” Opt. Express 17(20), 17895–17908 (2009).
[Crossref] [PubMed]

Opt. Lett. (1)

Other (5)

G. Park, J. Hong, Y. Kim, and B. Lee, “Enhancement of viewing angle and viewing distance in integral imaging by head tracking,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (Optical Society of America, 2009), DWB27.

C. C. Smyth, “Apparatus for tracking the human eye with a retinal scanning display, and method thereof,” U.S. Patent No. 6, 120, 461. 19 Sep. 2000.

J. Nakamura, T. Takahashi, and Y. Takaki, “Enlargement of viewing freedom of reduced-view SMV display,” in IS&T/SPIE Electronic Imaging, International Society for Optics and Photonics (2012).

J. C. Yang, C. S. Wu, C. H. Hsiao, R. Y. Tsai, and Y. P. Hung, “Evaluation of an eye tracking technology for 3D display applications,” in 3DTV Conference (2008), p. 345.
[Crossref]

Kinect, http://www.kinectfordevelopers.com/ . Kinect is a registered trademark of Microsoft Corporation in the United States and/or other countries.

Supplementary Material (2)

Media 1: MOV (969 KB)
Media 2: MOV (3399 KB)



Figures (8)

Fig. 1 Architecture of the proposed POVZ based II system.
Fig. 2 Comparison of viewing zones between (a) the conventional tracking based II system and (b) the proposed POVZ II system.
Fig. 3 Relationship between the viewing zones and the AEIAs in the POVZ II system.
Fig. 4 Generation process for AEIAs of the proposed POVZ II system.
Fig. 5 Experimental setup of the II system with super wide viewing angle.
Fig. 6 3D scene built in the experiments.
Fig. 7 AEIAs and viewing zones: (a) A0,0 and its corresponding elemental images, (b) A3,0 and its corresponding elemental images, (c) the region of V0,0, (d) the region of V3,0.
Fig. 8 Viewing angle of the conventional II system (Media 1): (a) leftmost view, (b) middle view, (c) rightmost view; and viewing angle of the POVZ II system (Media 2): (d) leftmost view, (f) middle view, (h) rightmost view, with (e) and (g) comparisons to the conventional II viewing angle.

Tables (2)

Table 1 Configuration parameters and experiment environment of the proposed II system
Table 2 Region of the POVZ and the trigger angle in the experiment

Equations (14)

$$\theta_{vc} \le \theta_{tr} + \theta_0. \tag{1}$$

$$\theta_v \le \theta_{tr} + 2\theta_0 - \varepsilon_{\min}, \tag{2}$$

$$\theta = \left(\arctan\frac{x}{z},\ \arctan\frac{y}{z}\right). \tag{3}$$

$$i = \begin{cases} \mathrm{round}\!\left(\dfrac{g(1-t_h)}{r_h}\cdot\dfrac{x}{z}\right), & |x| < x_{\max} = z\tan\!\left(\dfrac{\theta_{tr}}{2}\right) \\[2ex] \mathrm{round}\!\left(\dfrac{g(1-t_h)}{r_h}\tan\!\left(\dfrac{\theta_{tr}}{2}\right)\right), & |x| \ge x_{\max} \end{cases} \tag{4}$$

$$j = \begin{cases} \mathrm{round}\!\left(\dfrac{g(1-t_v)}{r_v}\cdot\dfrac{y}{z}\right), & |y| < y_{\max} = z\tan\!\left(\dfrac{\theta_{tr}}{2}\right) \\[2ex] \mathrm{round}\!\left(\dfrac{g(1-t_v)}{r_v}\tan\!\left(\dfrac{\theta_{tr}}{2}\right)\right), & |y| \ge y_{\max} \end{cases} \tag{5}$$

$$\Delta n_{i,j} = (\Delta n_{i,j})_c + (\Delta n_{i,j})_a = \big((\Delta\mu_{i,j})_c + (\Delta\mu_{i,j})_a,\ (\Delta\lambda_{i,j})_c + (\Delta\lambda_{i,j})_a\big), \tag{6}$$

$$(\Delta\mu_{i,j})_c = i(1-t_h), \tag{7}$$

$$(\Delta\mu_{i,j})_a = \mathrm{round}\!\left(\frac{u\,r_h(1-t_h)}{2g\tan(\theta_{tr}/2)}\,i\right), \tag{8}$$

$$(\Delta\lambda_{i,j})_c = j(1-t_v), \tag{9}$$

$$(\Delta\lambda_{i,j})_a = \mathrm{round}\!\left(\frac{v\,r_v(1-t_v)}{2g\tan(\theta_{tr}/2)}\,j\right), \tag{10}$$

$$\Delta D_{i,j} = \Big(\big((\Delta\mu_{i,j})_c + (\Delta\mu_{i,j})_a\big)d,\ \big((\Delta\lambda_{i,j})_c + (\Delta\lambda_{i,j})_a\big)d\Big), \tag{11}$$

$$I_{i,j}(p,q) = I(m,n)_{m,n}. \tag{12}$$

$$p = (m+1)\times u - m - 1 + (\Delta\mu_{i,j})_c + (\Delta\mu_{i,j})_a, \tag{13}$$

$$q = (n+1)\times v - n - 1 + (\Delta\lambda_{i,j})_c + (\Delta\lambda_{i,j})_a. \tag{14}$$
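The viewer-to-viewing-zone mapping of Eqs. (4)–(5) can be sketched in code as follows. This is a minimal illustration, not the authors' implementation: the function name and all parameter values are placeholders, and since the clamped branch of Eqs. (4)–(5) does not state a sign in the source, the sign of the outermost zone index is taken from the sign of the viewer coordinate as an assumption.

```python
import math

def viewpoint_index(x, y, z, g, t_h, t_v, r_h, r_v, theta_tr):
    """Map a tracked viewer position (x, y, z) to a viewing-zone index (i, j),
    following Eqs. (4)-(5). Positions beyond x_max / y_max are clamped to the
    outermost zone reachable within the tracking angle theta_tr (radians).
    g, t_h, t_v, r_h, r_v are placeholder system parameters, not the paper's values."""
    half_tan = math.tan(theta_tr / 2)
    x_max = z * half_tan  # lateral limit of the trackable region at depth z
    y_max = z * half_tan
    if abs(x) < x_max:
        i = round(g * (1 - t_h) / r_h * (x / z))
    else:
        # clamp to the outermost zone; sign choice is an assumption
        i = round(g * (1 - t_h) / r_h * half_tan) * (1 if x > 0 else -1)
    if abs(y) < y_max:
        j = round(g * (1 - t_v) / r_v * (y / z))
    else:
        j = round(g * (1 - t_v) / r_v * half_tan) * (1 if y > 0 else -1)
    return i, j
```

A viewer on the optical axis maps to the central zone (0, 0), and a viewer far outside the tracking cone stays pinned to the outermost zone, which is what lets the POVZ scheme hold the last valid zone instead of flipping.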
