A viewing angle enhanced integral imaging display using two elemental image masks is proposed. In our new method, rays emitted from the elemental images are directed by two masks into the corresponding lenses. Due to this elemental image guiding by the masks, the number of elemental images contributing to each integrated image point is increased, enhancing the viewing angle. The experimental results show that the proposed method achieves a viewing angle twice as large as that of the conventional method with the same lens array.
© 2009 Optical Society of America
Although we live in a three-dimensional (3D) world, almost all displays present two-dimensional (2D) images. 3D displays are needed for important applications such as virtual surgery, 3D design, and remote control, because 3D images carry extra information that is helpful to humans. Therefore, 3D display is an active research field. Several 3D technologies have been developed, for example, stereoscopy [1], autostereoscopy [2], holography [3, 4], and integral imaging [5, 6]. Among them, integral imaging has the advantages that it does not require any special glasses and provides continuous viewpoints within the viewing angle. It also provides full parallax, full color, and real-time operation. However, the conventional integral imaging display (IID) has drawbacks such as a small viewing angle (VA) [10, 11], limited viewing resolution [12, 13], and a short depth range. In this paper, we focus on the VA issue.
The conventional IID has a simple structure comprising an elemental lens array and a 2D display panel, as shown in Fig. 1. The elemental images (EIs) are displayed on the 2D display panel and integrated into 3D images by the lens array. A 3D integrated point appears at the intersection of rays emitted from different EIs. Each EI carries different directional information, so viewers perceive different perspectives. For example, a 3D image point P1 is reconstructed by three rays that are emitted from EI2, EI3, and EI4 and collected by elemental lenses L2, L3, and L4, respectively. The number of rays contributing to the integration of a 3D image point is limited by the size of the EI region. In the conventional IID, the size of the EI region equals the size of the elemental lens. If the EI region is larger, adjacent EI regions overlap, and a moving observer sees duplicated integrated images because an EI can be projected through a neighboring elemental lens. For the integrated point P1 shown in Fig. 1, this limitation restricts the number of rays to three. The VA of an integrated object point is determined by the angle between the two extreme rays. Since the angular separation between rays is uniform, this angle is determined by the number of rays. Therefore, the VA of an integrated object point is determined by the number of rays and is fundamentally limited by the size of the EI region.
The VA depends on the position of the 3D integrated point. P1 and P2 lie on the same line along the horizontal axis, but their VAs are not equal. P2 and P3 are located at the same depth plane, but their VAs also differ.
S. Jung et al. [14] proposed a VA-enhanced system using elemental lens switching. They combined spatial and temporal multiplexing by using double display devices and orthogonal polarizations, achieving a VA of 9.6° (left) and 13.4° (up). This system, however, is bulky because it requires two display devices and a large beam splitter. Y. Kim et al. [15] used a flexible screen and a curved lens array. The effective maximum VA of their system is approximately 33° for a real image and 40° for a virtual image. However, their method enhances only the horizontal VA, leaving the vertical VA unchanged. D.-H. Shin et al. [16] used a large-aperture lens that achieves a similar effect to the curved lens array with a much simpler configuration. The VA of their system is enhanced along both the horizontal and vertical directions, but the experimentally achieved maximum VA is limited to approximately 21.5° and 20.2° in the horizontal and vertical directions, respectively. In the following, we explain the principle of the proposed method, which is not bulky and enhances both the horizontal and vertical VA. Experimental verification of its feasibility is also provided.
2. Proposed method
The VA of the conventional IID is given by

VA = 2 arctan(PL / 2g),  (1)

where g is the gap between the lens array and the display panel and PL is the size of the elemental lens.
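As a quick numerical check, taking VA = 2 arctan(PL / 2g) as the conventional-VA expression (consistent with the approximately 20° value quoted for the conventional system later in the paper), the experimental parameters give:

```python
import math

def conventional_va_deg(p_l_mm: float, gap_mm: float) -> float:
    """Conventional IID viewing angle: VA = 2*arctan(PL / 2g), in degrees."""
    return math.degrees(2.0 * math.atan(p_l_mm / (2.0 * gap_mm)))

# Parameters used later in the experiment: 5 mm lens pitch, 14 mm gap.
va = conventional_va_deg(5.0, 14.0)
print(f"conventional VA = {va:.1f} deg")  # ~20.2 deg
```

This matches the 20° figure cited in the experimental section for the conventional system.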
If the size of the EI region is not limited to the size of the elemental lens and the location of the EIs is not confined to behind the corresponding elemental lens, the angle between the two extreme rays collected to reconstruct a 3D point increases. For this purpose, the proposed method uses two additional spatial light modulators (SLMs). The new EIs are guided by the two SLMs into the corresponding elemental lenses. Owing to the ray guiding of the two SLMs, the EIs no longer need to be confined to the size and location of an elemental lens; therefore the VA is enhanced. Figure 2 shows the concept of the proposed method. In Fig. 2, a 3D pixel P1 is formed by eight rays emitted from EP1,1, EP1,2, EP1,3, EP1,4, EP1,5, EP1,6, EP1,7, and EP1,8. Note that four EIs, EP1,1, EP1,2, EP1,7, and EP1,8, are not located directly behind the corresponding lenses L4, L5, L9, and L10, respectively; hence they are not available in the conventional IID. In the proposed method, however, the two SLMs make them usable by ray guiding. The four rays from these extra EIs increase the VA of pixel P1 because the angle between the two extreme rays increases.
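The intuition above can be sketched numerically. Assuming the conventional-VA form VA = 2 arctan(w / 2g), where w is the usable EI width behind each lens, freeing an EI region m times the lens pitch scales w by m. This is an illustrative sketch under that assumption, not the paper's derivation:

```python
import math

def va_deg(ei_width_mm, gap_mm):
    # VA subtended at a lens by an EI region of the given width.
    return math.degrees(2 * math.atan(ei_width_mm / (2 * gap_mm)))

PL, g = 5.0, 14.0         # lens pitch and gap from the experiment
print(va_deg(PL, g))      # conventional: EI confined to one lens pitch, ~20.2 deg
print(va_deg(2 * PL, g))  # masks free roughly twice the EI region, ~39.3 deg
```

Doubling the effective EI region roughly doubles the VA, which is consistent with the measured 38° reported in the experimental section.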
From a geometric calculation, we can find the EI positions (x, y) corresponding to the integrated 3D point at (u, v, L) by
where PD is the pixel size of the display panel, L is the integrated image distance, and i and j are the lens indices along the X and Y axes, respectively. When the lateral size of the object is h, we can determine the size of the EIs using Eqs. (2) and (3) as
From Eq. (4), the sizes of all EIs are the same for one object and depend only on the object size.
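Since the displayed equations are not reproduced here, the following is a plausible pinhole-model sketch of the mapping they describe: a ray from a 3D point at lateral position u and distance L, passing through the center of lens i (at lateral position i·PL), hits the display plane a gap g behind the array; the EI size then scales as g/L times the object size. The function names and the exact form are assumptions for illustration, and the published formulas may differ in detail:

```python
def ei_position(i, u, L, p_l, g):
    """Lateral display-plane position of the EI point for lens i,
    assuming a pinhole at each lens center (lateral position i*p_l)."""
    lens = i * p_l
    return lens + (lens - u) * g / L  # similar-triangles projection

def ei_size(h, L, g):
    """EI size for an object of lateral size h at distance L."""
    return h * g / L

# A point on the axis of lens 3 maps straight back to the lens axis:
print(ei_position(3, 3 * 5.0, 24.0, 5.0, 14.0))  # 15.0
# For a given object (fixed h and L) every lens gets the same EI size:
print(ei_size(4.0, 24.0, 14.0))
```

Under this model the EI size is independent of the lens index, which is consistent with the statement that all EIs of one object have the same size.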
Configuration of SLMs
Figure 3 shows EI points E1 and E2 guided into elemental lenses L1 and L2, respectively, in two different configurations of SLMs. In Fig. 3(a), two pixels S1,1 and S1,2 of the SLM are opened to guide rays R1,2 and R2,1 to elemental lens L1. However, one SLM is not sufficient: a pixel of the LCD emits rays over a large angle and the SLM pixel size is not small enough, so some unwanted rays emitted from E1, for example R1,1, are not blocked properly and are imaged by elemental lens L2. We therefore use a second SLM to block the unwanted rays that pass SLM1, as shown in Fig. 3(b). For example, ray R1,1 passes through SLM1 because pixel S1,1 of SLM1 is opened for E1, but it is blocked by SLM2. In this way, the two SLMs block unwanted rays and successfully guide the remaining rays into their corresponding lenses.
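The mask-pixel selection described above is pure ray geometry: the SLM pixel to open for a given EI point and target lens lies where the EI-to-lens-center ray crosses the SLM plane. A minimal sketch (the function is illustrative; the 2 mm and 4 mm plane distances match the mask positions used later in the experiment):

```python
def open_pixel_position(ei_x, lens_x, gap, slm_dist):
    """Lateral position on an SLM plane (slm_dist from the display,
    with the lens array at distance `gap`) where the ray from EI
    point ei_x to the center of the target lens at lens_x crosses."""
    t = slm_dist / gap                 # fractional distance along the ray
    return ei_x + (lens_x - ei_x) * t

# EI at x = 2 mm aimed at a lens centered at x = 10 mm, gap 14 mm:
print(open_pixel_position(2.0, 10.0, 14.0, 2.0))  # crossing on SLM1, ~3.14 mm
print(open_pixel_position(2.0, 10.0, 14.0, 4.0))  # crossing on SLM2, ~4.29 mm
```

Opening only these two pixels passes the wanted ray, while a ray from the same EI point toward a neighboring lens crosses the SLM planes at different positions and is blocked.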
Although the use of two SLMs enhances the VA of the IID, it should be mentioned that the low transmittance of the SLMs can reduce the light efficiency of the proposed IID. Polarization control can also be complex because of the tandem configuration of the two SLMs. If the SLMs are not sufficiently thin, their finite thickness can limit the VA of the IID. Finally, insufficient VA and resolution of the SLMs can reduce the VA and the resolution of the proposed IID. These limitations, however, stem from the current technological limits of SLMs and are expected to be largely overcome as the technology develops.
3. Image volume
Even with the ray guiding of the two SLMs, the VA cannot be increased arbitrarily, because the EIs can overlap. In Fig. 4, the EIs of the large object 2 overlap because the object is large. There is also another kind of overlapping, caused by multiple objects; for example, the EIs of objects 3 and 4 overlap in Fig. 4. Considering these two kinds of overlapping, we can define an image volume within which 3D images can be displayed without overlapping of the EIs.
To determine the image volume, let us first assume that the number of elemental lenses is n and that the maximum field of view (FOV) of each elemental lens is VAmax. We first define the shared region, where 3D image points can be integrated by all elemental lenses; it depends on n and VAmax. The dot-dash lines in Fig. 5 show the VAmax of each lens. If a 3D image point P1 is outside the shared region, some lenses cannot contribute to its integration because P1 is outside their FOV, which reduces the VA of the 3D image point. To obtain the maximal VA, the image volume should lie inside the shared region. The minimum distance between the shared region and the lens array is given by
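Since the displayed expression for this minimum distance is not reproduced here, the sketch below assumes the natural geometry: the shared region is the intersection of the n FOV cones (half-angle VAmax/2) of lenses spread over a width (n−1)·PL, so the cones of the two outermost lenses first overlap at Lmin = (n−1)·PL / (2·tan(VAmax/2)). This is a reconstruction under that assumption, not the published formula:

```python
import math

def l_min(n, p_l_mm, va_max_deg):
    """Distance at which the FOV cones of the two outermost lenses
    (centers separated by (n-1)*p_l_mm) begin to overlap."""
    half = math.radians(va_max_deg / 2.0)
    return (n - 1) * p_l_mm / (2.0 * math.tan(half))

# With PL = 5 mm and VAmax = 60 deg (the configuration used for Fig. 6):
print(f"{l_min(2, 5.0, 60.0):.2f} mm")  # n = 2
print(f"{l_min(3, 5.0, 60.0):.2f} mm")  # n = 3: shared region starts farther away
```

Under this reading, Lmin grows with n, which is consistent with the later observation that the image volume shifts away from the lens array as the lens number increases.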
The first factor determining the image volume is the 3D image size. If the EI size of the object is smaller than or equal to the elemental lens size, the overlapping problem does not occur. Under this condition, the maximum lateral object size SO is given by
According to this condition, we can draw the two dotted lines in Fig. 5. If an object is located between these two lines, its EIs do not overlap.
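Given that the EI size scales as g/L times the object size, the no-overlap condition "EI size ≤ lens pitch" gives a maximum lateral object size of SO = PL·L/g. This is a hedged reading of the elided equation, consistent with the dotted lines in Fig. 5 widening with distance:

```python
def max_object_size(p_l_mm, L_mm, g_mm):
    """Largest lateral object size whose EIs fit within one lens pitch,
    assuming EI size = object size * g / L."""
    return p_l_mm * L_mm / g_mm

# At the object distances used later in the experiment (g = 14 mm, PL = 5 mm):
for L in (22.0, 24.0, 26.0):
    print(f"L = {L:.0f} mm -> S_O <= {max_object_size(5.0, L, 14.0):.2f} mm")
```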
The second overlapping problem, caused by multiple objects, further limits the image volume. The EIs of the two small objects 3 and 4 in Fig. 4 overlap because the EI regions of neighboring elemental lenses for the separated objects coincide on the EI plane. Hence we find the EI region of each elemental lens and limit the image volume so that the EIs of neighboring elemental lenses for two different objects do not overlap. For example, in Fig. 5, the uppermost limit of the EI region of the 1st elemental lens is the point C when the object is located in the volume defined by the shared region and by the 3D image size limitation explained above. The EI of the 2nd elemental lens should be located above C to avoid overlapping. The object region that places the EI of the 2nd elemental lens above C is the region below the line joining C and the principal point of the 2nd elemental lens, as shown in Fig. 5. This process can be repeated from the central elemental lens to the boundary elemental lens to find the object region that is free from multi-object overlapping.
The image volume is defined as the intersection of the shared region, the region given by the 3D image size limitation, and the region given by the position limitation of multiple objects, as shown in Fig. 5. The location and size of the image volume can be described by the smallest distance Lmin; the lateral size hSS and distance LSS of the intersections of the shared-region lines and the 3D image size limitation lines; and the lateral size hSP and distance LSP of the intersections of the 3D image size limitation lines and the image position limitation lines. The last four parameters are given by
respectively. For a system configuration of PL = 5 mm, g = 14 mm, and VAmax = 60°, the image volume depends on the lens number n, as shown in Fig. 6. According to Fig. 6, the overall size of the image volume increases as the lens number n increases. It can also be observed that the image volume shifts farther from the lens array as the lens number increases.
4. Viewing angle
The VA of a pixel inside the image volume is not uniform: it depends on both the distance and the lateral position of the 3D pixel. From Fig. 5, the VA is defined by
where L is the distance along the Z axis and u is the lateral position along the Y axis. Figure 7 shows the VA of 3D image points inside the image volume for n = 2 and n = 3. Since the VAs of 3D image points in the same depth plane differ, the relation between the VA and the distance is not a thin curve in Fig. 7(b). The VA also decreases as the distance increases.
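The position dependence can be illustrated with a simple reconstruction: for a point at lateral position u and distance L, the two extreme rays go to the outermost lens centers at ±n·PL/2, giving VA(u, L) = arctan((n·PL/2 − u)/L) + arctan((n·PL/2 + u)/L). This is an assumed form, since the published equation is not reproduced here, but it exhibits both behaviors noted for Fig. 7:

```python
import math

def va_at_point(u_mm, L_mm, n, p_l_mm):
    """Angle between the extreme rays from point (u, L) to the two
    outermost lens centers at +/- n*p_l_mm/2 (assumed geometry)."""
    half_w = n * p_l_mm / 2.0
    return math.degrees(math.atan((half_w - u_mm) / L_mm)
                        + math.atan((half_w + u_mm) / L_mm))

# n = 2, PL = 5 mm: VA varies across one depth plane and falls with distance.
print(va_at_point(0.0, 20.0, 2, 5.0))  # on-axis point at L = 20 mm
print(va_at_point(2.0, 20.0, 2, 5.0))  # off-axis, same depth: smaller VA
print(va_at_point(0.0, 30.0, 2, 5.0))  # farther point: smaller VA
```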
5. Experimental results and discussion
Proposed system configuration
From Figs. 6 and 7, if the number of elemental lenses is increased, the image volume gets larger and moves farther from the lens array. The number of elemental lenses cannot be increased arbitrarily, however, since the image volume should remain around the central depth plane. In the system configuration, the gap g and the focal length of the elemental lens array are constants, so the central depth plane is determined by the Gaussian lens equation [17]. An IID can produce 3D integrated images only near the central depth plane; if a 3D image is far from this plane, it is defocused and not integrated well. Therefore the 3D images should be located within a tolerable depth range around the central depth plane. In the proposed method, the system parameters should be adjusted so that the central depth plane and the tolerable depth range lie well inside the image volume.
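The central depth plane follows from the Gaussian lens equation 1/f = 1/g + 1/L, i.e. L = f·g/(g − f). A quick check with the experimental parameters given later (f = 8 mm, g = 14 mm):

```python
def central_depth_plane(f_mm, g_mm):
    """Image distance from the Gaussian lens equation 1/f = 1/g + 1/L."""
    return f_mm * g_mm / (g_mm - f_mm)

print(f"{central_depth_plane(8.0, 14.0):.2f} mm")  # ~18.67 mm
```

The objects placed 22 mm to 26 mm from the lens array in the experiment then lie within the 13 mm tolerable depth range around this plane.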
Figure 8 shows four different image volumes in which the VA of the 3D image point is between 40° and 60°, together with the depth range. The one-sided tolerable depth range was determined experimentally to be 13 mm from the central depth plane when the gap g is between 13 mm and 15 mm and the display pixel pitch is 0.2505 mm. The two lines in Fig. 8 denote this one-sided tolerable depth range. Among the four image volumes shown in Fig. 8, we chose n = 2 and g = 14 mm for the experiment, because the overlap between the depth range and the image volume is largest in that case. In Figs. 8(c) and 8(d), n = 3, so the image volumes are larger than in Figs. 8(a) and 8(b), but the overlap between the image volume and the depth range is small. In Fig. 8(a), the overlap is large but the image volume is smaller than in the case of Fig. 8(b).
In the experiment, we used three objects, “O”, “I”, and “P”, that are inside the image volume, located 24 mm, 22 mm, and 26 mm from the lens array, respectively, as shown in Fig. 9. Note that they lie within the tolerable depth range, since their deviations from the central depth plane are approximately 4 mm, 6 mm, and 8 mm, much smaller than the experimentally determined limit of 13 mm. The gap is g = 14 mm, the pitch of the elemental lens is PL = 5 mm, the focal length of the lens array is f = 8 mm, the distance between the display and mask 1 is 2 mm, and the distance between the display and mask 2 is 4 mm. An LCD with 2560 × 1600 pixel resolution and 0.2505 × 0.2505 mm2 pixel pitch was used.
The proposed system needs one set of EIs for the display panel and two mask images for SLM1 and SLM2. Figure 10 shows the generated EIs and mask images. The EIs and mask images are created using Eqs. (2) and (3), as in the conventional IID. Note that even though the same formulas are used for the EI calculation, the number of EIs is larger in the proposed method because of the ray guiding. Figure 10(d) shows the EIs for the conventional IID with the same system configuration. Comparing Figs. 10(a) and 10(d), the number of EIs in the proposed method is larger than in the conventional system. These additional EIs are guided by SLM1 and SLM2 and contribute to the enhancement of the VA.
In our experimental implementation, two transparent films were used instead of SLMs, because the physical dimensions of available SLMs are too large to fit in the system with the required parameters.
The experimental results are shown in Figs. 11-13. Figure 11 shows the role of the masks in the proposed method. If the new EIs for the proposed method are displayed without masks, many flipped 3D images appear, as shown in Fig. 11(a), because EIs are imaged by non-corresponding lenses. This image flipping is the main reason the EI area is limited to the area just behind the corresponding lens in the conventional IID, which reduces the VA. When the new EIs are displayed with the masks, just one 3D image appears at the center of the proposed display system, because the masks successfully guide the EIs to their corresponding elemental lenses.
Figures 12 and 13 confirm that the proposed method successfully creates 3D integrated images with a wide VA. Moreover, the VA is enhanced not only along the horizontal direction but also along the vertical direction. The VA of the proposed system experimentally reaches 38° in both the horizontal and vertical directions. The VA of the conventional system is calculated to be 20° by Eq. (1), but the measured VA is less than 20° because 20° is only the theoretical maximum VA of one particular 3D point. In Fig. 13, only the center image is clear; the other images are distorted because they are outside the VA. The proposed method also places no limitation on color performance. However, some distortions can be seen in Fig. 12, because the elemental lenses are simple spherical lenses with significant aberration. For example, the objects are curved in Figs. 12(a), (c), (g), and (i). The center image in Fig. 12(e) is relatively free from aberration because the lens aberration is reduced for the paraxial image. Figures 12(a), (c), and (g) also show the lens boundary artifact that is common to IIDs.
Figure 14 shows movies of the 3D images displayed by the proposed and conventional methods.
We proposed a novel method to enhance the VA of an integral imaging system using two masks. Because the two mask images on the SLMs guide the EIs into the corresponding elemental lenses, the size and location of the EIs on the display panel are no longer limited or fixed. These additional EIs increase the number of rays collected at each 3D image point, so the VA is enhanced accordingly. The experimental results confirm that the two mask images on the SLMs successfully guide the EIs into the corresponding lenses and block unnecessary rays. We also defined the image volume in which the EIs of 3D integrated images do not overlap each other. We experimentally achieved a 38° VA along both the horizontal and vertical directions. The VA-limiting factor in the experiment was the lens aberration.
This research was partly supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute for Information Technology Advancement) (IITA-2009-C1090-0902-0018). This work was also partly supported by a grant of the Korean Ministry of Education, Science and Technology (the Regional Core Research Program / Chungbuk BIT Research-Oriented University Consortium).
References and links
1. J. L. Fergason, S. D. Robinson, C. W. McLaughlin, B. Brown, A. Abileah, T. E. Baker, and P. J. Green, “An innovative beamsplitter-based stereoscopic/3D display design,” Proc. SPIE 5664, 488–494 (2005). [CrossRef]
2. N. A. Dodgson, “Autostereoscopic 3D displays,” Computer 38(8), 31–36 (2005). [CrossRef]
3. B. P. Ketchel, C. A. Heid, G. L. Wood, M. J. Miller, A. G. Mott, R. J. Anderson, and G. J. Salamo, “Three-dimensional color holographic display,” Appl. Opt. 38(29), 6159–6166 (1999), http://www.opticsinfobase.org/ao/abstract.cfm?URI=ao-38-29-6159. [CrossRef]
4. T. Mishina and M. Okui, “Reconstruction of Three-Dimensional Images of Real Objects by Electronic Holography,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (CD) (Optical Society of America, 2008), paper DMC1. http://www.opticsinfobase.org/abstract.cfm?URI=DH-2008-DMC1.
5. G. Lippmann, “La Photographie Integrale,” Comptes-Rendus Academie des Sciences 146, 446 (1908).
6. A. Stern and B. Javidi, “Three-dimensional sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–607 (2006).
7. B. Lee, J.-H. Park, and S.-W. Min, “Three-dimensional display and information processing based on integral imaging,” in Digital Holography and Three-Dimensional Display, T.-C. Poon, eds. (Springer, New York, USA, 2006). [CrossRef]
8. J.-H. Park, G. Baasantseren, N. Kim, G. Park, J.-M. Kang, and B. Lee, “View image generation in perspective and orthographic projection geometry based on integral imaging,” Opt. Express 16(12), 8800–8813 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-12-8800. [CrossRef] [PubMed]
10. J.-H. Park, S. W. Min, S. Jung, and B. Lee, “Analysis of viewing parameters for two display methods based on integral photography,” Appl. Opt. 40(29), 5217–5232 (2001). [CrossRef]
11. R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martinez-Corral, “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Opt. Express 15(24), 16255–16260 (2007), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-24-16255. [CrossRef] [PubMed]
12. D.-H. Shin, B. Lee, and E.-S. Kim, “Effect of illumination in an integral imaging system with large depth of focus,” Appl. Opt. 44(36), 7749–7753 (2005), http://www.opticsinfobase.org/ao/abstract.cfm?URI=ao-44-36-7749. [CrossRef] [PubMed]
13. J.-S. Jang, F. Jin, and B. Javidi, “Three-dimensional integral imaging with large depth of focus by use of real and virtual image fields,” Opt. Lett. 28(16), 1421–1423 (2003), http://www.opticsinfobase.org/ol/abstract.cfm?URI=ol-28-16-1421. [CrossRef] [PubMed]
14. S. Jung, J.-H. Park, H. Choi, and B. Lee, “Viewing-angle-enhanced integral three-dimensional imaging along all directions without mechanical movement,” Opt. Express 11(12), 1346–1356 (2003), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-11-12-1346. [CrossRef] [PubMed]
15. Y. Kim, J.-H. Park, S.-W. Min, S. Jung, H. Choi, and B. Lee, “Wide-viewing-angle integral three-dimensional imaging system by curving a screen and a lens array,” Appl. Opt. 44(4), 546–552 (2005), http://www.opticsinfobase.org/ao/abstract.cfm?URI=ao-44-4-546. [CrossRef] [PubMed]
17. F. Yu and X. Yang, Introduction to Optical Engineering (Cambridge University Press, 1997), Chap. 2.