This paper proposes a high resolution integral imaging system using a lens array composed of non-uniform decentered elemental lenses. One of the problems of integral imaging is the trade-off between resolution and the number of views: when the number of views is small, motion parallax becomes strongly discrete if the viewing angle is to be maintained. To overcome this trade-off, the proposed method uses elemental lenses whose size is smaller than that of the elemental images. To keep the images generated by the elemental lenses at a constant depth, the lens array is designed so that the optical centers of the elemental lenses are located at the centers of the elemental images, not at the geometrical centers of the elemental lenses. To compensate for the resulting optical distortion, a new image rendering algorithm is developed so that an undistorted 3D image can be presented with the non-uniform lens array. The proposed lens array design can be applied to integral volumetric imaging, where display panels are layered to show volumetric images in the scheme of integral imaging.
© 2012 OSA
Nowadays various stereoscopic displays have been developed for entertainment, medical diagnosis, and other applications. Most autostereoscopic displays on the market are based on parallax barrier or lenticular lens methods, where different images are cast to each eye to attain stereo vision. These conventional systems, however, have two significant problems. The first is the limited viewing angle: when the observer is positioned directly in front of the display, the correct image is seen, while the geometry of the presented space is skewed when viewed from the side-lobe area, and the depth is totally reversed when the viewer is positioned between the main lobe and the sub-lobe. The second is the vergence-accommodation conflict, caused by the contradiction between the depth given by binocular parallax and that inferred from the focusing of the eyes, which leads to eye strain or 3D sickness of the observer.
Various displays have been researched and developed to solve these problems. The viewing angle of autostereoscopic displays using a parallax barrier or lenticular lens can be widened by head-tracking the observer and updating the presented image, but the vergence-accommodation conflict remains with this setup. One way to solve the vergence-accommodation conflict is to use volumetric displays, which realize natural depth representation by layering or modulating image planes [1–3]. Volumetric displays, however, are incapable of reconstructing scenes with viewer-position-dependent effects, such as occlusion and glossy surfaces.
To realize viewer-position-dependent effects and a natural vergence-accommodation relationship simultaneously, coarse integral volumetric imaging (CIVI) has been proposed [4–7]. This method combines multiview and volumetric display solutions and achieves natural depth representation and a wide viewing angle. One of the problems of this method is the trade-off between resolution and the number of views described above. If the resolution is scaled up, the number of views is reduced and the motion parallax becomes strongly discrete; if the number of views is increased, the resolution is scaled down. One way to solve this problem is to reduce the pixel pitch of the display panel, which requires expensive hardware innovations. Another way is simply to scale up the elemental images without changing the scale of the elemental lenses, which in turn distorts the image.
In this research, we realize a high resolution CIVI that maintains the number of viewpoints by casting large elemental images onto a lens array consisting of elemental lenses whose optical axes pass through the centers of the elemental images, not through the geometrical centers of the elemental lenses.
This paper is organized as follows. In Section 2 the principle of CIVI is reviewed. In Section 3 CIVI with decentered elemental lenses, which realizes a high resolution 3D image, is proposed and explained. In Section 4 the experimental results of the proposed scheme are given. In Section 5 we summarize this paper.
2. Coarse integral volumetric imaging
Integral imaging, which combines a convex lens array with a high resolution display panel, is a prominent 3D display system in the sense that it can show not only horizontal parallax but also vertical parallax. In conventional integral imaging, the number of pixels each elemental lens covers is usually the same as the number of views, which means that the viewer perceives each elemental lens as a single pixel. Therefore the focus of the viewer's eyes is always fixed close to the lens sheet, which makes it hard to show realistic images far beyond the screen or popping out of the screen.
Besides the orthodox integral imaging described above, we can also think of an integral imaging system where each elemental lens is large enough to cover dozens of times more pixels than the number of views. Kakeya defined this type of integral imaging as coarse integral imaging (CII). In CII each elemental lens covers dozens by dozens, or hundreds by hundreds, of pixels. The number of pixels observable through each elemental lens is equal to the number of pixels covered by the lens divided by the square of the magnifying power of the optical system. For example, when 100 × 100 pixels are covered by each elemental lens of the convex lens array and the magnifying power given by the pair of lenses (elemental lens and large aperture lens) is 10, 100 pixels are observed through each elemental lens.
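The relation between covered and observable pixels can be sketched in a few lines; this is a minimal illustration of the arithmetic above, with function and variable names of our own choosing:

```python
def observable_pixels(covered_w, covered_h, magnification):
    """Number of pixels seen through one elemental lens: the covered
    pixel count divided by the square of the magnifying power."""
    return (covered_w * covered_h) / magnification ** 2

# The example from the text: 100 x 100 pixels behind each lens,
# magnifying power 10 -> 100 observable pixels per lens.
print(observable_pixels(100, 100, 10))  # -> 100.0
```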
With CII we can express 3D space far beyond the screen or popping out of the screen. To present 3D space far beyond the screen we can use a virtual image, which is generated when the distance between the display panel and the lens array is shorter than the focal length of the elemental lenses. We can express 3D objects popping out of the screen by generating a real image, which emerges when the distance between the display panel and the lens array is longer than the focal length of the elemental lenses. In an actual real-image CII we usually use a large aperture Fresnel lens in addition to the convex lens array to converge light, for the aberration becomes small with this configuration. The distance between the display panel and the lens array is kept almost equal to the focal length of the elemental lenses so that the light may be collimated by each elemental lens. The large aperture convex Fresnel lens then converges the collimated light and generates a real image at the focal distance of the large Fresnel lens from its surface.
The distance between the large aperture convex lens and the convex lens array can vary. When the distance is short, the optics is similar to that of traditional integral imaging and the total system size can be kept compact. When the distance between the lens array and the large aperture Fresnel lens is long enough to generate a real image of the lens array, the system becomes a multiview display: the real image of the lens array corresponds to the viewing zone, where the whole image observed by each eye switches alternately.
Though CII can show images off the screen, the problem of vergence-accommodation conflict still exists, for it can generate only one image plane. Besides the vergence-accommodation conflict, CII has another major problem that can severely damage the image quality. When the distance between the lens array and the large aperture Fresnel lens is not long enough, multiple images from different elemental lenses are observed at the same time. In this case the discontinuity between the images from different lenses becomes severe when the 3D image to be shown has large depth. To express depth, CII depends only on the parallax of multiview images, and the parallax among the images from different lenses has to grow as the depth of the 3D object to be shown increases. Consequently the discontinuity of the images at the lens boundaries ruins the image quality.
One way to solve the problems of vergence-accommodation conflict and image discontinuity at the same time is to introduce a volumetric approach using multilayered panels in addition to the multiview approach [11–16]. In the scheme of coarse integral imaging, Kakeya et al. proposed coarse integral volumetric imaging (CIVI) as shown in Fig. 1. In CIVI multiple display panels are inserted to generate a volumetric real image so that the parallax between the images from two adjacent lenses is kept small enough. Since the artificial parallax is kept small, the discontinuity between images from adjacent lenses is also kept small. The vergence-accommodation conflict is also reduced, since each 3D pixel is displayed at the real image layer near the correct depth. To express pixels between two panels we can use the DFD approach [17–19], where each 3D pixel is expressed with two adjacent panels, each of which shows the pixel with an intensity inversely proportional to the distance between the 3D pixel and the panel. Thus natural continuity of depth is realized.
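The DFD intensity split described above can be sketched as follows; this is a minimal illustration assuming normalized intensities and panel depths, not the authors' actual implementation:

```python
def dfd_weights(z, z_front, z_back):
    """Distribute a 3D pixel's intensity between the two adjacent
    panels: each panel's share is inversely proportional to its
    distance from the point, so the two shares sum to one."""
    span = z_back - z_front
    w_front = (z_back - z) / span  # point near the front panel -> large front share
    w_back = (z - z_front) / span
    return w_front, w_back

# A point a quarter of the way from the front panel to the back panel:
print(dfd_weights(0.25, 0.0, 1.0))  # -> (0.75, 0.25)
```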
Here it should be noted that the layered real images generated by the lenses are curved and distorted. Not only is each generated image plane distorted, but the image planes generated by the elemental lenses of the lens array are also non-uniform. The image planes generated by the elemental lenses off the optical axis of the large aperture Fresnel lens do not have line symmetry about the optical axis, but are slanted toward it, and the slant becomes greater as the elemental lens lies farther from the optical axis. Without taking this distortion into account, smooth connection of the elemental images cannot be realized. To cope with these distortions we can apply DFD to the distorted image planes as shown in Fig. 2, which realizes natural connections between the elemental images.
To realize DFD for distorted image planes, information on the shape of the distortion is needed. The distortion in the horizontal and vertical directions also becomes pronounced as the viewing angle widens, so it should be obtained and corrected to show an undistorted image. The information on the distortion can be calculated by optical simulation: we trace the light rays from the viewpoint to the display panel to obtain how the images are distorted, and then use texture mapping to correct the distortion based on the result of the simulation. As for the correction of distortion in the depth direction, we calculate the refraction of light rays from the points on the display panel and obtain the points where the light rays converge.
3. CIVI with decentered elemental lenses
One method to realize a high resolution 3D image is to scale up the resolution of the elemental images, which can be attained simply by using a high resolution display panel with a fine pixel pitch. This method, however, can be quite expensive and also faces technical limitations due to the restrictions of hardware manufacturing. The other solution is to scale up the elemental images, which makes the motion parallax more discrete if the size of the elemental lenses is enlarged accordingly. This problem can be avoided by keeping the size of the elemental lenses unchanged, as shown in Fig. 3. In this case the elemental images become larger than the elemental lenses, which shifts the image center away from the lens center. This shift grows as the elemental image lies farther from the center of the display, which increases the distance between the elemental image and the corresponding elemental lens.
When the distance between the elemental image and the elemental lens is much longer than the focal length of the elemental lens, the image plane is generated quite close to the elemental lens. Thus the depths of the image planes generated by the edge elemental lenses become far apart from those generated by the elemental lenses in the center, as shown in Fig. 4. Though some of the distortion can be corrected in software, software correction in the depth direction is possible only within the interval between the nearest and farthest image planes. Therefore software correction does not work when the distortion in the depth direction is too large.
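The thin-lens equation makes this behavior concrete. The sketch below uses the 90 mm focal length of the prototype described later; the sample display-to-lens distances are our own illustrative choices:

```python
def image_distance(f, s):
    """Thin-lens equation 1/s + 1/s' = 1/f: distance s' of the real
    image for an object (display) at distance s in front of the lens."""
    return 1.0 / (1.0 / f - 1.0 / s)

f = 90.0  # focal length of the elemental lens (mm)
for s in (93.0, 120.0, 300.0):  # display-to-lens distances (mm)
    # s slightly beyond f -> image plane far away; s much beyond f ->
    # image plane generated close to the lens, as described in the text.
    print(s, round(image_distance(f, s), 1))
```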
To solve this problem, in this paper we propose a new optical design where the lens array consists of elemental lenses whose optical axes pass through the centers of the elemental images, not through the geometrical centers of the elemental lenses, as shown in Fig. 5. In conventional CIVI, the optical axis of each elemental lens passes through the geometrical center of the lens. In the new CIVI the lens array consists of decentered elemental lenses with the same focal length. An example of a decentered elemental lens is shown in Fig. 6. Decentered elemental lenses can be cut out from a large lens. Since the proposed non-uniform lens array can be assembled from portions of lenses cut from a limited number of lenses, we can realize a multiview system with a large number of views (elemental lenses) at relatively low cost.
The decentered elemental lens array is set so that the distance between the display and the lens array is close to the focal length of the elemental lenses. With this optical setting, not only the light rays from the central part of the display but also those from the peripheral part are collimated by refraction at the decentered lenses. As a result the central and peripheral elemental lenses generate image planes at close depths, as shown in Fig. 7.
The method proposed here can be regarded as a kind of integral imaging composed of non-uniform elemental lenses. One example of integral imaging with non-uniform elemental lenses is the system using elemental lenses with different focal lengths and aperture sizes, where a lens array with different focal lengths is used to enhance the depth of the image, while fast modulation of the lens array is applied to improve the resolution of the presented 3D image. Use of a decentered lens array for 3D display can be found in the research on a multiview projection display system, which consists of a single projector and an image steering projection screen composed of a decentered micro lens array. In that system an image projected from the projector is steered in different directions by the decentered micro lens arrays to realize a wide viewing angle. The system proposed here is unique in that it uses a convex lens array composed of coarse decentered lenses.
When coarse elemental lenses are used for image generation, software distortion correction plays a critical role in presenting an undistorted image. The distortion correction algorithm for the proposed system with a decentered lens array is similar to the conventional algorithm: we obtain the information on the distortion by optical simulation. The main difference is how to obtain the viewpoint for the perspective transformation. In the conventional algorithm the viewpoint is assumed to lie on the light ray that goes through the center of the elemental image and the center of the elemental lens. When decentered lenses are used, however, this approximation does not hold, because the lens center is not always located inside each elemental lens.
In this research, distortions are corrected as follows. First we cast light rays from the lattice points in each elemental image on the display panel toward the geometrical centers of the elemental lenses. Then we calculate how the rays are refracted and in which directions they go. We define the point where the extended rays that pass through the large aperture lens converge most closely as the viewpoint of the perspective transformation.
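The point "where the extended rays converge most" can be computed as the least-squares point closest to the bundle of refracted rays. The sketch below is our reading of that step, not the authors' code:

```python
import numpy as np

def convergence_point(origins, directions):
    """Least-squares point minimizing the summed squared distance to
    rays p_i + t * d_i, i.e. where the ray bundle converges most."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Two rays that intersect exactly at (0, 0, 2):
origins = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]
directions = [np.array([-1.0, 0.0, 2.0]), np.array([1.0, 0.0, 2.0])]
print(convergence_point(origins, directions))  # ~ [0, 0, 2]
```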
If the distance between the lens array and the large aperture lens is longer than the focal length of the latter, the position where the rays converge is located on the observer's side. On the other hand, if the distance between the lens array and the large aperture lens is shorter than the focal length of the latter, the position where the rays converge is located on the opposite side from the observer, as shown in Fig. 8. In this case, a perspective transformation in the opposite direction is applied, as shown in Fig. 9. When objects overlap one another in the depth direction, the standard projection function works so that a pixel of a farther image is overwritten by a pixel of a nearer image by referring to the depth buffer of each pixel. When the viewpoint of the perspective transformation is on the opposite side from the observer, however, a pixel of the image nearer the viewpoint of projection should be overwritten by a farther pixel.
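This reversed overwriting rule can be illustrated with a toy depth test; the fragment representation and all names below are our own, not the authors':

```python
def resolve_pixel(fragments, viewpoint_behind_display):
    """Pick which fragment survives at one pixel. fragments is a list
    of (depth_from_projection_viewpoint, color). With the normal
    viewpoint the nearest fragment wins; when the projection viewpoint
    lies on the opposite side from the observer, the farthest one does."""
    pick = max if viewpoint_behind_display else min
    return pick(fragments, key=lambda f: f[0])[1]

frags = [(1.0, "red"), (3.0, "blue")]
print(resolve_pixel(frags, False))  # -> red  (standard depth test)
print(resolve_pixel(frags, True))   # -> blue (reversed depth test)
```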
The distortion is corrected based on the result of the optical simulation so that the images acquired by the perspective transformation can be observed by the viewer. The distortion correction procedure is shown in Fig. 10. A distorted grating pattern reflecting the optical distortion is acquired by plotting the points where the calculated light rays intersect with the projection plane. The image obtained by the perspective transformation is divided into the tetragonal areas of the distorted grating pattern, and each divided tetragonal texture is mapped onto the corresponding square area of the original grating pattern. This process is applied to each of the RGB colors so that color aberration is taken into account and corrected. By depicting inversely distorted elemental images on each display in this way, an undistorted image with little color aberration can be shown to the observer.
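The tetragon-to-square texture mapping can be sketched with plain bilinear interpolation over each grid cell. This simplified version (nearest-neighbour sampling, names of our own choosing) only illustrates the idea, not the authors' implementation:

```python
import numpy as np

def warp_cell(src, corners, out_size):
    """Fill one square cell of the pre-distorted elemental image by
    bilinearly interpolating between the four corners of the matching
    tetragon in the distorted grating pattern. corners = (tl, tr, bl,
    br), each an (x, y) position in src pixel coordinates."""
    tl, tr, bl, br = [np.asarray(c, float) for c in corners]
    h, w = src.shape[:2]
    cell = np.zeros((out_size, out_size) + src.shape[2:], src.dtype)
    for j in range(out_size):
        for i in range(out_size):
            u, v = i / (out_size - 1), j / (out_size - 1)
            top = (1 - u) * tl + u * tr        # interpolate along top edge
            bot = (1 - u) * bl + u * br        # interpolate along bottom edge
            x, y = (1 - v) * top + v * bot     # interpolate between edges
            yi = min(h - 1, max(0, int(round(y))))
            xi = min(w - 1, max(0, int(round(x))))
            cell[j, i] = src[yi, xi]           # nearest-neighbour sample
    return cell
```

With corners matching an undistorted cell, the mapping reduces to the identity; displaced corners produce the inverse pre-distortion.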
For volumetric expression, we divide the elemental images acquired in the previous step in the depth direction using the DFD method. To realize this, we have to calculate where each panel forms its image plane in the depth direction. The image planes are defined as the planes containing the points where the rays cast from the lattice points on the elemental image to the elemental lens converge. The position of each image plane is calculated by applying this process to each display panel positioned at a different depth, as shown in Fig. 11.
The DFD algorithm for CIVI depicts each pixel by distributing its intensity to the two display panels that generate the two closest image planes. This process is applied separately to each color, since the refractive index differs among the colors, which changes the color of the elemental image on each display panel.
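The per-color processing reflects the wavelength dependence of the refractive index: by the lensmaker's equation the focal length scales as 1/(n - 1). The sketch below uses illustrative index values, not the prototype's actual glass data:

```python
def focal_length_for_color(f_design, n_design, n_color):
    """Lensmaker scaling f proportional to 1/(n - 1): focal length of
    one color channel relative to the design (green) focal length."""
    return f_design * (n_design - 1) / (n_color - 1)

f_green = 90.0  # design focal length (mm), green channel
n = {"red": 1.513, "green": 1.517, "blue": 1.521}  # illustrative indices
for c in ("red", "green", "blue"):
    # Lower index (red) -> longer focal length; higher index (blue) -> shorter.
    print(c, round(focal_length_for_color(f_green, n["green"], n[c]), 2))
```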
4. Experimental results

Based on the distortion correction algorithm discussed in the previous section, we built a CIVI system with decentered elemental lenses. The detailed specifications of this prototype system are as follows:
- Size of display: 22 inches
- Resolution of display: 3840 × 2400 pixels
- Number of display layers: 2
- Number of elemental lenses: 10 × 6
- Focal length of elemental lenses: 90 mm
- Size of elemental lenses: 38 mm × 38 mm
- Size of elemental images: 48 mm × 48 mm
- Distance between elemental lenses and display panels (2 layers): 90 mm, 93 mm
- Distance between elemental lenses and large aperture lens: 380 mm
- Focal length of large aperture lens: 325 mm
- Size of large aperture lens: 400 mm × 300 mm
The elemental images to be drawn on the two display panels, calculated by the algorithm explained in the previous section, are shown in Fig. 12. The images are inversely distorted, and the colors of the two image planes differ because color aberration is taken into account.
The hardware of the prototype system is shown in Fig. 13 and the image observed with this prototype system is shown in Fig. 14. As these figures show, undistorted images are observed by the viewer. In the optical setting of this prototype system, the area of the elemental images is (48/38)² ≈ 1.59 times larger than that of the elemental lenses, which means that an image with 1.59 times more pixels can be presented.
To compare the proposed system with a conventional system, the images generated by 35 mm × 35 mm uniform (non-decentered) elemental lenses and 45 mm × 45 mm elemental images are shown in Fig. 15. The image observed from the central viewpoint (Fig. 15(a)) is slightly distorted, while the image from the right viewpoint (Fig. 15(b)) is strongly distorted, which means the distortion is too strong to be removed in software.
5. Conclusion

This paper has proposed an integral volumetric imaging system using a lens array composed of decentered elemental lenses. The proposed system attains a higher image resolution by using elemental images larger than the elemental lenses. To cope with the difference in size between the elemental images and the elemental lenses, the lens array is designed so that the optical centers of the elemental lenses are located at the geometrical centers of the elemental images, not at the geometrical centers of the elemental lenses. With such lenses, the distances between the centers of the elemental images and the optical centers of the elemental lenses remain constant, and the distortion of the image in the depth direction can be kept small. Another merit of the proposed lens array is its lower manufacturing cost: since the non-uniform lens array can be assembled from portions of lenses cut from a limited number of lenses, a multiview system with a large number of views can be realized at relatively low cost.
We have also proposed a distortion correction algorithm for the system with the decentered lens array. First we cast light rays from the lattice points in each elemental image on the display panel toward the geometrical centers of the elemental lenses. Then we calculate how the rays are refracted and in which directions they go, and define the point where the extended rays that pass through the large aperture lens converge most closely as the viewpoint of the perspective transformation. The distortion is corrected based on the result of the optical simulation so that the images acquired by the perspective transformation can be observed by the viewer. By depicting inversely distorted elemental images on each display, an undistorted image with little color aberration can be shown to the observer. For volumetric expression, the elemental images are divided in the depth direction using the DFD method.
We built the prototype hardware with decentered elemental lenses and confirmed that undistorted images with a higher resolution are observed by the viewer.
This research was partially supported by a Grant-in-Aid for Scientific Research, MEXT, Japan (Grant number 22680008).
References and links
2. S. Suyama, M. Date, and H. Takada, “Three-dimensional display system with dual-frequency liquid-crystal varifocal lens,” Jpn. J. Appl. Phys. 1(2), 480–484 (2000). [CrossRef]
3. A. Sullivan, “DepthCube solid state 3D volumetric display,” Proc. SPIE 5291, 279–284 (2004). [CrossRef]
4. H. Kakeya, “Coarse integral imaging and its applications,” Proc. SPIE 6803, 680317, 680317-10 (2008). [CrossRef]
5. H. Kakeya, “Improving image quality of coarse integral volumetric display,” Proc. SPIE 7237, 723726, 723726-9 (2009). [CrossRef]
6. H. Kakeya, T. Kurokawa, and Y. Mano, “Electronic realization of coarse integral volumetric imaging with wide viewing angle,” Proc. SPIE 7524, 752411, 752411-10 (2010). [CrossRef]
8. G. Lippmann, “La photographie intégrale,” Comptes Rendus Acad. Sci. 146, 446–451 (1908).
9. B. Lee, S. Jung, S. W. Min, and J. H. Park, “Three-dimensional display by use of integral photography with dynamically variable image planes,” Opt. Lett. 26(19), 1481–1482 (2001). [CrossRef] [PubMed]
10. H. Kakeya, “MOEVision: simple multiview display with clear floating image,” Proc. SPIE 6490, 64900J, 64900J-8 (2007). [CrossRef]
13. R. Yasui, I. Matsuda, and H. Kakeya, “Combining volumetric edge display and multiview display for expression of natural 3D images,” Proc. SPIE 6055, 60550Y, 60550Y-9 (2006). [CrossRef]
14. H. Ebisu, T. Kimura, and H. Kakeya, “Realization of electronic 3D display combining multiview and volumetric solutions,” Proc. SPIE 6490, 64900Y, 64900Y-10 (2007). [CrossRef]
15. Y. Kim, J. H. Park, H. Choi, J. Kim, S. W. Cho, and B. Lee, “Depth-enhanced three-dimensional integral imaging by use of multilayered display devices,” Appl. Opt. 45(18), 4334–4343 (2006). [CrossRef] [PubMed]
16. Y. Kim, H. Choi, J. Kim, S. W. Cho, Y. Kim, G. Park, and B. Lee, “Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers,” Appl. Opt. 46(18), 3766–3773 (2007). [CrossRef] [PubMed]
17. S. Suyama, H. Takada, K. Uehira, S. Sakai, and S. Ohtsuka, “A novel direct-vision 3-D display using luminance-modulated two 2-D images displayed at different depths,” SID’00 Dig. Tech. Pap. 31(1), 1208–1211 (2000). [CrossRef]
18. S. Suyama, H. Takada, and S. Ohtsuka, “A direct-vision 3-D display using a new depth-fusing perceptual phenomenon in 2-D displays with different depths,” IEICE Trans. on Electron., E85-C, 1911–1915 (2002).
19. S. Suyama, S. Ohtsuka, H. Takada, K. Uehira, and S. Sakai, “Apparent 3-D image perceived from luminance-modulated two 2-D images displayed at different depths,” Vision Res. 44(8), 785–793 (2004). [CrossRef] [PubMed]
20. S. Sawada and H. Kakeya, “Coarse integral volumetric imaging with flat screen and wide viewing angle,” J. Electron. Imaging 21(1), 011004 (2012). [CrossRef]
21. H. Deng, Q. Wang, D. Li, and F. Wang, “An integral imaging display with wide viewing angle,” SID’11 Dig. Tech. Pap. 42(1), 1095–1097 (2011). [CrossRef]
22. J. S. Jang and B. Javidi, “Large depth-of-focus time-multiplexed three-dimensional integral imaging by use of lenslets with nonuniform focal lengths and aperture sizes,” Opt. Lett. 28(20), 1924–1926 (2003). [CrossRef] [PubMed]
23. L. Bogaert, Y. Meuret, S. Roelandt, A. Avci, H. De Smet, and H. Thienpont, “Demonstration of a multiview projection display using decentered microlens arrays,” Opt. Express 18(25), 26092–26106 (2010). [CrossRef] [PubMed]