In this paper, we analyze the relationship between the viewer and the viewing zones of an integral imaging (II) system and present a partially-overlapped viewing zone (POVZ) based II system with a super wide viewing angle. In the proposed system, the viewing angle can be wider than that of the conventional tracking based II system. In addition, the POVZ eliminates the flipping and time delay of the 3D scene as well. The proposed II system has a super wide viewing angle of 120° without flipping effect, about twice as wide as that of the conventional system.
© 2014 Optical Society of America
Integral imaging (II) can reconstruct true 3D images without glasses and provides both horizontal and vertical parallaxes with continuous views [1–4]. However, there are still some problems, such as the limitations of 3D image resolution, depth range, and viewing angle, which delay the practical application of II. In the past decades, many researchers have focused on solving these problems [5–8], and many technologies have been proposed, including computer graphics technology, head and eye tracking technology, and so on.
In this paper, we focus on the viewing angle issue of an II system and use computer-generated integral imaging (CGII) to capture the elemental images of the 3D scene [9–11]. The viewing angle is defined as the scope within which the viewer can observe the 3D images without obvious imperfection (such as flipping). In the conventional II system, viewers can watch 3D images only in a very narrow region, and flipping images are observed within a slightly larger angle. Many researchers have focused on modifying the optical structure of the II system [12–18]. The viewing angle is indeed enhanced by using a curved lens array, two elemental image masks, lens switching, and so on. However, some of these structures are not practical because the specific devices are difficult to fabricate. With the development of tracking technology, some researchers use head or eye tracking to enhance the viewing angle, and remarkable performances have been obtained [19–22]. The viewer's position is obtained by head or eye tracking, and the tracking results are used for generating elemental images in the proper positions. A great contribution has been made by Gilbae Park et al. to enhance the viewing angle of the II system based on head tracking [6, 7]. In their methods, the viewer is always located at the central position of the viewing zone, so the viewing angle is limited by the maximum tracking angle of the tracking device. Moreover, flipping and time delay of the reconstructed 3D scene may occur when the viewer moves fast.
In this paper, we propose a partially-overlapped viewing zone (POVZ) based II system with a super wide viewing angle, which consists of a conventional II system and a tracking device. In the POVZ II system, the viewing zones are rearranged within 120° to eliminate the flipping 3D images in the crosstalk zones of the conventional II system. Besides, the flipping and time delay issues of the conventional tracking based II system are also eliminated. In the POVZ II system, a tracking device obtains the viewer's 3D position, and the system generates a corresponding adaptive elemental image array (AEIA) which reconstructs the 3D scene based on the tracking information in real time. We then introduce the generation method for the AEIA of the POVZ based on the viewer's position. In the experiment, we build the POVZ II system with a super wide viewing angle of 120°, which is the region free of obvious imperfection.
2. Principle of the proposed POVZ II system
As shown in Fig. 1, the architecture of the proposed system is composed of four parts: the input and tracking part includes the parameter and 3D scene data input and the tracking of the viewer’s position, the calculation part obtains the POVZ and virtual camera array information, the pixel mapping process generates the AEIA based on the parallax images, and the display process displays the AEIA for the viewer.
In the paper, section 2.1 explains the difference of principles between the proposed and conventional system. Then we determine the region of the POVZ. Section 2.2 analyzes the relationship between the viewing zones and the AEIAs and calculates the shift of the AEIA. Section 2.3 proposes the generation method for the AEIAs.
2.1 Comparison of viewing zones in conventional tracking based II system and POVZ II system
In the conventional tracking based II system, as shown in Fig. 2(a), the EIA is updated according to the tracking result in real time to make sure that the viewer is always located at the central position of the viewing zone [7]. So the viewing angle of the conventional tracking based II system θvc is limited by the tracking device's largest tracking angle θtr and the viewing angle of the conventional II system θ0. When the viewer moves out of the tracking region, if the system does not record the viewer's last position information, the system cannot display the EIA exactly. In this case, the viewing angle θvc is no more than the largest tracking angle θtr. What's more, because of the limitations of the tracking device's response time and accuracy, a time delay will affect the viewing effect when the viewer moves fast.
In our tracking based II system, by using the POVZ, the viewing angle θv can be wider than that of the conventional tracking based II system, θvc. As shown in Fig. 2(b), when the viewer moves out of the tracking region, the proposed system optimizes the viewing zone according to the last tracked information, and almost the entire viewing zone is arranged outside the tracking region, so the viewer can watch the 3D images over a wider angle without tracking.
As shown in Fig. 2(b), the viewing space is divided into several viewing zones in the horizontal and vertical directions, coded by Vi, j. V0, 0 is the viewing zone of the conventional II system and is regarded as the original viewing zone in our II system. The boundary of Vi, j can be obtained from V0, 0 with a certain shift. The adjacent Vi, j and Vi-1, j have a partially-overlapped part denoted as Pi, j_i-1, j, as shown in Fig. 2(b). Pi, j_i-1, j reconstructs the same 3D scene in the overlapped zone of Vi, j and Vi-1, j. The proportion of Pi, j_i-1, j in Vi, j is denoted as the overlapped coefficient of viewing zone Vi, j, which determines the size of the POVZ. The overlapped coefficient is a variable that decreases gradually from the center of the viewing space to its edge. The overlapped coefficient of V0, 0 is the largest and is denoted as the initial overlapped coefficient t.
In the proposed POVZ system, each viewing zone has a corresponding AEIA, and the AEIA of Vi, j is denoted as Ai, j. The adjacent viewing zones are segregated by the angular bisector of the angle range of Pi, j_i-1, j (dotted red lines in Fig. 2(b)) which serves as a trigger line to send a signal to update the AEIA when the viewer moves from one viewing zone to the adjacent one.
The tracking device detects the viewer's position in the viewing space in real time. When the viewer moves to Vi, j and arrives at the trigger line in Pi, j_i-1, j, the AEIA is changed from Ai-1, j to Ai, j. When the viewer moves out of the maximum tracking angle, the AEIA remains unchanged. Because almost the entire Vi, j, rather than only half of it, lies outside the tracking region, the viewing angle θv in the proposed system is wider than θvc in the conventional tracking based II system. The viewing angle θv can be calculated from θtr, θ0, and the overlapped coefficients.
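The zone-switching behavior above can be sketched in code. This is a minimal illustration, not the paper's implementation: the trigger angles below are hypothetical placeholders (the real values come from Table 2), and the tracking angle is taken as the Kinect's 57° from Section 3.

```python
# Sketch of the trigger-line based AEIA selection described above.
# Assumptions: 57 deg maximum tracking angle (Kinect, Section 3);
# the trigger angles for zones 1..3 are hypothetical placeholders.

THETA_TR = 57.0                 # assumed maximum tracking angle (degrees)
TRIGGERS = [4.0, 12.0, 21.0]    # hypothetical trigger angles for zones 1..3

def select_zone(theta_deg, last_zone=0):
    """Return the horizontal viewing-zone index i for a tracked angle.

    When the viewer leaves the tracking range, the last AEIA is held,
    so viewing continues in the marginal zone without tracking."""
    if abs(theta_deg) > THETA_TR / 2:
        return last_zone        # out of tracking range: keep last AEIA
    sign = 1 if theta_deg >= 0 else -1
    zone = 0
    for i, trig in enumerate(TRIGGERS, start=1):
        if abs(theta_deg) >= trig:
            zone = i            # crossed the trigger line into zone i
    return sign * zone
```

A viewer at 0° stays in V0,0; crossing a trigger line switches the AEIA once, which is what removes the flipping seen at conventional zone boundaries.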
2.2 Relationship between the viewing zones and the AEIAs in the POVZ II system
In the proposed system, Vi, j is determined by the corresponding Ai, j, and Ai, j has corresponding content updates and a pixel shift Δni, j compared with A0, 0. In the POVZ II system, the AEIA is changed only when the viewer arrives at a trigger line.
As shown in Fig. 3, we assume that the origin point O is at the center of the lens array, and the tracking device obtains the viewer's 3D position P(x, y, z) in real time. The tracked viewer's angles θ in the viewing space can then be deduced geometrically, as shown in Fig. 3; the normalized horizontal and vertical position ratios are both within the range of (0, 1). The maximum tracking region is (−xmax, xmax) and (−ymax, ymax) at the viewing distance z.
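The geometry above can be sketched as follows. The paper's exact expressions are not reproduced here, so this assumes the standard deduction with the origin at the lens-array center: the viewing angles follow from the arctangent of the lateral offset over the viewing distance, and the normalized ratios map the tracking region onto (0, 1).

```python
import math

def viewer_angles(x, y, z):
    """Horizontal and vertical viewing angles (degrees) of the tracked
    position P(x, y, z), with the origin O at the lens-array center.
    Assumed geometry: theta = arctan(offset / distance)."""
    return math.degrees(math.atan2(x, z)), math.degrees(math.atan2(y, z))

def normalized_position(x, y, z, x_max, y_max):
    """Position normalized to the tracking region (-x_max, x_max) and
    (-y_max, y_max) at distance z; each ratio lies in (0, 1) while the
    viewer stays inside the tracking region."""
    return (x + x_max) / (2 * x_max), (y + y_max) / (2 * y_max)
```

For example, a viewer at the center of the tracking region gets the ratios (0.5, 0.5), and equal lateral offset and distance give a 45° horizontal angle.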
In our system, Ai, j reconstructs the 3D images within the viewing zone Vi, j. To make sure that almost the entire viewing zone lies outside the tracking region when the viewer moves to the tracking boundary, each viewing zone Vi, j has an exact shift. The movement of Vi, j is determined by the pixel shift Δni, j of Ai, j. The pixel shift Δni, j includes a conventional shift (Δni, j)c and an additional shift (Δni, j)a. The former is the same as the pixel shift in the conventional tracking based II system, which keeps the viewer at the center of the viewing zone. The latter places the viewer at an off-center position in the corresponding viewing zone. As shown in Fig. 3, the viewer is located at P(x, y, z) and the corresponding Ai, j has a pixel shift Δni, j. The conventional shift (Δni, j)c moves the viewing zone to the viewer's position, and the additional shift (Δni, j)a gives the viewing zone an additional movement that contributes to the wider viewing angle. Combining the conventional and additional shifts, Δni, j in the horizontal and vertical directions is given by Eqs. (4)–(10), from which the relationship between the viewing zones and the corresponding AEIAs can be obtained.
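The composition of the two shift terms can be expressed compactly. The expressions inside Eqs. (4)–(10) are not reproduced here, so `conv_shift` and `add_shift` stand in as already-computed values of the conventional and additional terms per axis.

```python
# Illustrative composition of the total pixel shift described above:
# delta_n = (delta_n)_c + (delta_n)_a, applied per axis.
# conv_shift and add_shift are placeholders for the outputs of the
# paper's Eqs. (4)-(10), which are not reproduced here.

def total_pixel_shift(conv_shift, add_shift):
    """Combine conventional and additional shifts, (h, v) tuples."""
    return (conv_shift[0] + add_shift[0], conv_shift[1] + add_shift[1])
```

The key point is that the additional term is what pushes the viewing zone off-center, yielding the wider angle.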
2.3 Generation method for AEIA of the proposed POVZ II system
In the proposed POVZ II system, we improve the viewpoint vector rendering (VVR) [23, 24] method to obtain the AEIAs efficiently. As shown in Fig. 4, after arranging the 3D scene in advance, we set a virtual camera array to pick up the 3D information, and each camera has an orthographic geometry. The number of virtual cameras equals the number of pixels in each elemental image of the AEIA. We suppose the number of micro-lenses in the lens array is M × N, and the size of each elemental image is u × v pixels. As shown in Fig. 4, in the horizontal direction, u virtual cameras are needed to obtain the u orthographic projection images. Then the orthographic projection images are interleaved to generate A0, 0 based on the VVR method.
We can obtain Ai, j for Vi, j as shown in Fig. 4. The virtual camera array for Ai, j has a specific shift ΔDi, j compared with that for A0, 0, but both of them have the same convergent point Pcon. The shift ΔDi, j of the virtual camera array also includes horizontal and vertical components in order to pick up the wider-angle parallax images. The shift ΔDi, j can be determined from the pixel shift Δni, j of Ai, j, as given in Eq. (11).
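As a hedged sketch of this step: Eq. (11) is not reproduced here, but the experimental example in Section 3 (a pixel shift of 18 producing a camera-array shift of (18 × 5.9) mm) is consistent with a per-pixel displacement factor, which is what the sketch below assumes.

```python
# Sketch of the camera-array shift computed from the pixel shift.
# Assumption: Eq. (11) reduces to delta_D = delta_n * step, where the
# per-pixel step of 5.9 mm is taken from the experimental example in
# Section 3; the exact geometric expression is not reproduced here.

PER_PIXEL_SHIFT_MM = 5.9  # from the (18 x 5.9) mm example in Section 3

def camera_array_shift_mm(delta_n_h, delta_n_v, step_mm=PER_PIXEL_SHIFT_MM):
    """Virtual-camera-array shift (horizontal, vertical) in millimetres
    for a pixel shift (delta_n_h, delta_n_v)."""
    return delta_n_h * step_mm, delta_n_v * step_mm
```

With delta_n_h = 18 this reproduces the 106.2 mm horizontal shift used for A3, 0 in the experiment.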
We obtain the u × v orthographic projection images for Ai, j. In the m′-th column and the n′-th row orthographic projection image, the pixel in the m-th column and the n-th row is denoted as I(m, n)m′, n′. The pixel I(m, n)m′, n′ is mapped to the p-th column and the q-th row pixel in Ai, j, which is denoted as I′i, j(p, q), as shown in Fig. 4. This mapping relationship is given by Eq. (12).
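The interleaving step can be sketched as follows. Since Eq. (12) is not reproduced here, this uses a common VVR-style convention in which pixel (m, n) of viewpoint (m′, n′) lands at position (m·u + m′, n·v + n′) of the elemental image array; the paper's exact index ordering may differ.

```python
import numpy as np

def interleave_vvr(proj, u, v):
    """Interleave u*v orthographic projection images (each M x N pixels)
    into an elemental image array of (M*u) x (N*v) pixels.

    proj[mp][np_] is the projection image for viewpoint (mp, np_).
    Convention assumed here (the paper's Eq. (12) may differ): pixel
    (m, n) of viewpoint (mp, np_) maps to (m*u + mp, n*v + np_)."""
    M, N = proj[0][0].shape
    out = np.zeros((M * u, N * v), dtype=proj[0][0].dtype)
    for mp in range(u):
        for np_ in range(v):
            # strided assignment places one viewpoint's pixels into
            # the same offset of every elemental image at once
            out[mp::u, np_::v] = proj[mp][np_]
    return out
```

Each elemental image thus collects one pixel from every viewpoint image, which is why the number of virtual cameras equals the pixel count of an elemental image.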
3. Experiments and results
The proposed II system is configured with the specification in Table 1. We use the pinhole array instead of the lens array. Each elemental image contains 13 × 13 pixels which are covered by one elemental pinhole. So we build 13 × 13 virtual cameras to pick up the AEIAs.
In our experiment, we build up a “man head” as the 3D scene, and the central depth plane is located at the center of the head, as shown in Fig. 6. The viewing angle of the conventional tracking based II display is 57° (±28.5°), which is equal to the maximum tracking angle of the Kinect when the viewing distance is about 3.1 m.
When the viewer’s position is tracked, the AEIAs are obtained, and two of them are shown in Figs. 7(a) and 7(b). A0, 0 is obtained when the viewer’s position is P1(0.0 m, 0.1 m, 3.1 m) with a viewing angle of 0° in the viewing space, and A3, 0 is obtained when the viewer moves to P2(3.7 m, 0.1 m, 2.4 m) with a viewing angle of about 55°. Simultaneously, the virtual camera array has a shift of (18 × 5.9) mm in the horizontal direction according to Eq. (11). The viewing zones are shown in Figs. 7(c) and 7(d), and we can see that the system displays A3, 0 as the most marginal AEIA when the viewing angle exceeds 27.5°.
In our experiment, each elemental image in A-1, 0 has 9 overlapped pixels with the corresponding elemental image in A0, 0, so the initial overlapped coefficient t is 9/13 in the horizontal direction. The viewing space is divided into seven viewing zones in the horizontal direction, and there are seven corresponding AEIAs. The region of each Vi, j, the pixel shift Δni, j of Ai, j, the corresponding shift of the virtual camera array, and the trigger angles of the adjacent viewing zones are listed in Table 2 in detail.
When the viewer moves in front of the II display, images from different positions are captured, as shown in Fig. 8. By practical measurement, the maximum viewing angle without flipping effect in the conventional II system is only 35°, as shown in Figs. 8(a)–8(c). In the proposed POVZ II system, however, the maximum viewing angle without flipping effect is 120° (±60°), as shown in Figs. 8(d)–8(h). The super wide viewing angle is achieved by the proposed POVZ. The flipping and time delay of the 3D scene when the viewer moves fast are further reduced by the good response time and accuracy of the tracking device.
An II system based on the POVZ is proposed to enhance the viewing angle effectively without flipping or time delay even if the viewer moves quickly. The POVZs allot the viewing space according to the relationship between the AEIAs and the viewing zones, and a generation method for the AEIAs of the POVZ is also proposed. In the experiment, the viewing angle of the II system is 120° without flipping effect, which is more than twice the maximum tracking angle of the Kinect. In addition, the viewing angle of our II system can be extended with a tracking device of better performance. By applying the POVZ to each viewer in a multi-viewer tracking II system, it may be possible to display 3D images for each viewer with a super wide viewing angle.
The work is supported by the “973” Program under Grant No. 2013CB328802, the NSFC under Grant Nos. 61225022 and 61320106015, and the “863” Program under Grant Nos. 2012AA011901 and 2012AA03A301.
References and links
1. G. Lippmann, “La photographie integrale,” C. R. Acad. Sci. 146, 446–451 (1908).
2. J. Hong, Y. Kim, H. J. Choi, J. Hahn, J. H. Park, H. Kim, S. W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues,” Appl. Opt. 50(34), H87–H115 (2011). [CrossRef] [PubMed]
3. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications [Invited],” Appl. Opt. 52(4), 546–560 (2013). [CrossRef] [PubMed]
6. G. Park, J. Hong, Y. Kim, and B. Lee, “Enhancement of viewing angle and viewing distance in integral imaging by head tracking,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (Optical Society of America, 2009), DWB27.
7. G. Park, J. H. Jung, K. Hong, Y. Kim, Y. H. Kim, S. W. Min, and B. Lee, “Multi-viewer tracking integral imaging system and its viewing zone analysis,” Opt. Express 17(20), 17895–17908 (2009). [CrossRef] [PubMed]
8. D. C. Hwang, J. S. Park, S. C. Kim, D. H. Shin, and E. S. Kim, “Magnification of 3D reconstructed images in integral imaging using an intermediate-view reconstruction technique,” Appl. Opt. 45(19), 4631–4637 (2006). [CrossRef] [PubMed]
9. C. C. Ji, H. Deng, and Q. H. Wang, “Pixel extraction based integral imaging with controllable viewing direction,” J. Opt. 14(9), 095401 (2012). [CrossRef]
10. K. C. Kwon, C. Park, M. U. Erdenebat, J. S. Jeong, J. H. Choi, N. Kim, J. H. Park, Y. T. Lim, and K. H. Yoo, “High speed image space parallel processing for computer-generated integral imaging system,” Opt. Express 20(2), 732–740 (2012). [CrossRef] [PubMed]
11. S. H. Jiao, X. G. Wang, M. C. Zhou, W. M. Li, T. Hong, D. Nam, J. H. Lee, E. H. Wu, H. T. Wang, and J. Y. Kim, “Multiple ray cluster rendering for interactive integral imaging system,” Opt. Express 21(8), 10070–10086 (2013). [CrossRef] [PubMed]
13. G. Baasantseren, J. H. Park, K. C. Kwon, and N. Kim, “Viewing angle enhanced integral imaging display using two elemental image masks,” Opt. Express 17(16), 14405–14417 (2009). [CrossRef] [PubMed]
16. H. Choi, J. H. Park, J. Kim, S. W. Cho, and B. Lee, “Wide-viewing-angle 3D/2D convertible display system using two display devices and a lens array,” Opt. Express 13(21), 8424–8432 (2005). [CrossRef] [PubMed]
17. R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martínez-Corral, “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Opt. Express 15(24), 16255–16260 (2007). [CrossRef] [PubMed]
19. R. Taherkhani and K. Mohammad, “Designing a high accuracy 3D auto stereoscopic eye tracking display, using a common LCD monitor,” 3D Res. 3(3), 1–7 (2012).
20. C. C. Smyth, “Apparatus for tracking the human eye with a retinal scanning display, and method thereof,” U.S. Patent 6,120,461 (19 September 2000).
21. J. Nakamura, T. Takahashi, and Y. Takaki, “Enlargement of viewing freedom of reduced-view SMV display,” in IS&T/SPIE Electronic Imaging, International Society for Optics and Photonics (2012).
22. J. C. Yang, C. S. Wu, C. H. Hsiao, R. Y. Tsai, and Y. P. Hung, “Evaluation of an eye tracking technology for 3D display applications,” in 3DTV Conference (2008), p. 345. [CrossRef]
23. K. S. Park, S. W. Min, and Y. Cho, “Viewpoint vector rendering for efficient elemental image generation,” IEICE Trans. Inf. Syst. E 90-D, 233–241 (2007).
24. K. C. Kwon, C. Park, M. U. Erdenebat, J. S. Jeong, J. H. Choi, N. Kim, J. H. Park, Y. T. Lim, and K. H. Yoo, “High speed image space parallel processing for computer-generated integral imaging system,” Opt. Express 20(2), 732–740 (2012). [CrossRef] [PubMed]
25. Kinect, http://www.kinectfordevelopers.com/. Kinect is a registered trademark of Microsoft Corporation in the United States and/or other countries.