We present a simple method to pick up (sense) large objects that are far away, and then display their three-dimensional images within the depth of focus of projection integral imaging systems. For this purpose, we propose to use either curved pickup devices or curved display devices or both. In this method, as the object distance increases, the longitudinal image depth reduces in a nonlinear way, while the lateral size reduces in a linear way. To reduce the depth of reconstructed images alone, a method to zoom in elemental images can be used. We analyze the two methods when they are used together. Experiments are presented to show the feasibility of our approach.
© 2004 Optical Society of America
A significant amount of effort has been made in three-dimensional (3-D) displays, such as 3-D television, video, and movies. However, a technology that enables high-resolution, large-scale 3-D color images has so far remained elusive [1–3]. The most successful 3-D display technique developed to date is stereoscopy. It is relatively simple to realize a stereoscopic system that can display large images with high resolution. However, stereoscopic techniques usually require supplementary glasses to evoke the 3-D visual effect, and they provide observers with only horizontal parallax and a limited number of viewpoints. Observation of stereoscopic images may also cause visual fatigue because of the convergence-accommodation conflict.
Convergence-accommodation conflict can be avoided by means of true 3-D image formation in space with full parallax and continuous viewing points. Holography is one of the best ways to form 3-D images in space, but recording full-color holograms for an outdoor scene is difficult [4–6]. When computer-generated holograms are prepared, a large amount of computation time is required to obtain proper gratings. Because coherent light is often used in holography, speckle is also a problem.
It is desirable to produce true 3-D images in space with incoherent light using two-dimensional (2-D) display devices. For this purpose, an alternative 3-D image formation technique based on ray optics, called integral imaging (II), has been studied [7–21]. In II, 3-D images are formed by crossing the rays coming from 2-D elemental images using a lenslet array. As in holography, II can provide observers with true 3-D images with full parallax and continuous viewing points. However, there are also serious drawbacks in II: The viewing angle, depth-of-focus, and resolution of 3-D images are limited [12–15]. In addition, 3-D images produced in direct-pickup II are pseudoscopic (depth-reversed) images. To overcome these limitations, a number of techniques have been presented, but they may make II systems more complex and thus less practical [13–18].
Recently, we developed a projection integral imaging (PII) system, in which a micro-convex-mirror array is used as a screen [19, 20]. In PII, it is easy to solve a few critical problems that make commercial use of II difficult. For example, the viewing angle improves dramatically, and the pseudoscopic to orthoscopic image conversion (P/O conversion) is not required when direct camera pickup is used. In addition, 3-D image resolution can be improved if large-scale high-resolution elemental images are projected onto a screen with a large number of micro-convex mirrors. For practical 3-D display applications such as 3-D television, video, and movies, PII needs to be able to handle large 3-D objects that may be far away from the sensor. So far it has not been clear how such objects can be displayed in an II system with limited depth-of-focus.
In this paper, we present a method to control the depth and lateral size of reconstructed 3-D images. This technique allows us to pick up large 3-D objects that may be far away, and then to display their demagnified 3-D images within the depth-of-focus of II systems. We show that curved pickup devices (i.e., a curved 2-D image sensor and a curved lenslet array) or curved display devices or both may be used for this purpose. When the lenslets in the curved array have a zooming capability, a linear depth control is additionally possible. We analyze the two methods when they are used together. In the experiments demonstrating the feasibility of our method, we use planar devices (lenslet array, sensor, and display). An additional large-aperture negative lens is placed in contact with the pickup lenslet array. We show that this arrangement is functionally equivalent to the curved pickup devices.
In fact, there have been prior studies using curved lenslet arrays for enhancement of the viewing angle [15, 17]. However, to the best of our knowledge, this is the first report on depth and magnification control of 3-D integral images using curved pickup and display devices.
2. Review of integral imaging
2.1 Conventional integral imaging
In conventional II (CII), planar lenslet arrays with positive focal lengths have been used, as depicted in Fig. 1. A set of elemental images of a 3-D object (i.e., direction and intensity information of the spatially sampled rays coming from the object) is obtained by use of a lenslet array and a 2-D image sensor such as a CCD or a CMOS image sensor, as depicted in Fig. 1(a). To reconstruct a 3-D image of the object, the set of 2-D elemental images is displayed in front of a lenslet array using a 2-D display panel, such as a liquid crystal display (LCD) panel, as depicted in Fig. 1(b). Suppose that the lenslet array with focal length f is positioned at z=0, and the display panel at z=-g. From the Gauss lens law
1/Li + 1/g = 1/f,  (1)
the gap distance g should be Li f/(Li - f) ≡ gr, where it is assumed that 3-D real images are formed around z=Li. The rays coming from the elemental images converge through the lenslet array to form a 3-D real image. The reconstructed 3-D image is a pseudoscopic (depth-reversed) real image of the object. To convert the pseudoscopic image to an orthoscopic image, a process that rotates every elemental image by 180 degrees around its own optic axis may be used. The orthoscopic image becomes a virtual image by this P/O conversion process. When the 3-D virtual image is formed around z=-Li, the gap distance g should be Li f/(Li + f) ≡ gv for optimal focusing from Eq. (1).
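As a numerical sketch of the two gap distances that follow from the Gauss lens law (the focal length and image distance below are illustrative assumptions, not the experimental values):

```python
# Gap distances from the Gauss lens law, 1/Li + 1/g = 1/f  [Eq. (1)].
# Illustrative values (assumed): f = 3 mm, Li = 100 mm.
f = 3.0     # lenslet focal length (mm)
Li = 100.0  # longitudinal image position (mm)

g_r = Li * f / (Li - f)  # gap for real-image display around z = +Li
g_v = Li * f / (Li + f)  # gap for virtual-image display around z = -Li

print(g_r, g_v)  # both are close to f, since Li >> f
```

Because Li is much larger than f, both gaps differ from f by only a few percent, which is why gr ≈ gv ≈ f later in the text.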
2.2 Projection integral imaging (PII)
In PII, the process to obtain elemental images is the same as in CII. However, the elemental images are projected through relay optics onto a lenslet array, as depicted in Figs. 2(a) and 2(b). Instead of the display lenslet array, it is also possible to use a micro-convex- or micro-concave-mirror array as a projection screen, as depicted in Figs. 2(c) and 2(d). When we use a lenslet array with a positive focal length or a micro-concave-mirror array, the in-focus plane of the projected elemental images should be positioned at z=-gr, as depicted in Figs. 2(a) and 2(c). If we use P/O-converted elemental images to display 3-D orthoscopic virtual images, which are formed around z=-Li, the in-focus plane of the projected elemental images should be positioned at z=-gv.
When we use a lenslet array with a negative focal length or a micro-convex-mirror array, 3-D orthoscopic virtual images are displayed without the P/O conversion. Suppose that 3-D images are formed around z=-Li and that the focal length of the lenslet array (or the micro-convex-mirror array) is -f. Then the gap distance g becomes Li f/(f - Li) ≡ -gr from Eq. (1). Thus the in-focus plane of the projected elemental images should be positioned at z=+gr, as depicted in Figs. 2(b) and 2(d). On the other hand, when we display 3-D real images around z=Li, the in-focus plane of the projected elemental images should be positioned at z=+gv. Because Li ≫ f in both PII and CII, gr ≈ gv ≈ f.
2.3 Advantages of PII over CII
Because PII allows the use of a micro-convex-mirror array as a projection screen, PII has the following significant advantages over CII:
First, the viewing angle can be greatly enhanced. In II, the full viewing angle ψ is limited and is determined approximately by 2×arctan[0.5/(f/#)], where f/# is the f-number of the lenslet, when the fill factor of the lenslet array is close to 1 [14–16]. It is easier to make diffraction-limited (or aberration-free) convex mirrors with a small f/# than it is to make similar lenslets. Each convex mirror element can have an f/# smaller than 1. For example, if f/#=0.5, the viewing angle ψ becomes 90 degrees, which is acceptable for many practical applications.
Second, the P/O conversion is unnecessary, if a positive lenslet array is used for direct camera pickup.
Third, it is easy to realize 3-D movies with large screens even if small display panels or film are used. This is because the display panel and the screen are separated, and thus the size of the elemental images that are projected onto the screen can be controlled easily by use of the relay optics.
Fourth, flipping-free observation of 3-D images is possible even if optical barriers are not used. This is because each elemental image can be projected only onto its corresponding mirror element.
Fifth, it is easy to implement spatial multiplexing or temporal multiplexing or both in PII. To display 3-D images with high resolution and a large depth-of-focus, the number of pixels in the entire set of elemental images should be sufficiently large. Because display panels that are currently available or expected in the near future cannot meet such a requirement, spatial multiplexing or temporal multiplexing or both may be needed to display the entire set of high-resolution elemental images.
In the experiments, therefore, we use PII with a micro-convex-mirror-array screen.
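The viewing-angle figure quoted in the first advantage above can be checked numerically; this sketch simply evaluates ψ = 2×arctan[0.5/(f/#)]:

```python
import math

# Full viewing angle psi = 2*arctan(0.5/(f/#)), valid when the fill factor
# of the mirror (or lenslet) array is close to 1.
def viewing_angle_deg(f_number):
    return 2.0 * math.degrees(math.atan(0.5 / f_number))

print(viewing_angle_deg(0.5))  # f/0.5 convex mirror -> 90.0 degrees
print(viewing_angle_deg(1.0))  # f/1 element -> ~53.1 degrees
```

The steep dependence on f/# is why low-f/# convex mirrors, which are easier to fabricate than equally fast lenslets, pay off so strongly in PII.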
3. Longitudinal depth control of 3-D images
In general, 3-D images reconstructed in II systems have a limited depth-of-focus δ. It was shown that δ cannot be larger than 1/(λρ²), where λ is the display wavelength and ρ is the resolution of the reconstructed 3-D images, defined as the inverse of the reconstructed image spot size [13, 18]. In PII, 3-D images with high resolution can be reconstructed only near the projection screen of micro-convex-mirror arrays (or the display lenslet array). Thus the depth-of-focus δ should be measured from the projection screen.
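As a rough numerical sketch of this bound (the wavelength and spot size below are assumed illustrative values, not measured ones):

```python
# Depth-of-focus bound: delta <= 1/(lambda * rho^2), with rho the inverse of
# the reconstructed image spot size. All values below are assumptions.
wavelength = 0.55e-6   # m (green light, assumed)
spot_size = 0.2e-3     # m (assumed reconstructed spot size)
rho = 1.0 / spot_size  # image resolution, m^-1

delta_max = 1.0 / (wavelength * rho ** 2)
print(delta_max)  # ~0.07 m: a depth-of-focus of a few centimeters
```

A sub-millimeter spot size thus already limits δ to the centimeter range, which is the scale of the roughly 5 cm depth-of-focus reported later in the experiments.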
Suppose that we are trying to pick up an object positioned beyond the depth-of-focus range. Specifically, the front surface of the object, whose longitudinal thickness is T, is positioned at z=zo>δ. When the focal lengths of the pickup lenslets and the micro-convex mirrors in the projection screen are equal in magnitude, a 3-D image is reconstructed either at z=zo for real image display or at z=-zo for virtual image display. Then we cannot display a focused 3-D image, because the image position is beyond the depth-of-focus range. Therefore, we need to control the depth (and thus the position) of the reconstructed 3-D integral images so that they are reconstructed near the screen, i.e., within the depth-of-focus.
3.1 Linear depth control by zooming the elemental images
If the focal length of the pickup lenslet array fp is longer than that of the display micro-convex-mirror array fd , the longitudinal scale of reconstructed image space is reduced linearly by a factor of fd /fp ≡r while the lateral scale does not change. So if (zo +T)r<δ, the 3-D reconstructed image is well focused.
One solution for picking up objects at various longitudinal positions and displaying their images within the depth-of-focus of II systems, therefore, is to use a pickup lenslet array with a variable focal length fp or an array of micro-zoom lenses. If we increase fp by a factor of α, every elemental image is also magnified by that factor, according to geometrical optics. Therefore, digital zoom-in can be used even if fp is fixed. In other words, by digitally magnifying every elemental image in a computer by a factor of α, we can change r as
r = fd/(αfp).  (2)
Then, an orthoscopic virtual image is reconstructed at z=-rzo for the object positioned at z=zo in the pickup process.
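The linear scaling of Eq. (2) can be sketched with the focal lengths that are used later in the experiments:

```python
# Linear depth scaling [Eq. (2)]: r = fd/(alpha*fp). An object at z = zo is
# reconstructed at z = -r*zo. Focal lengths are those quoted in Section 5.
fp = 3.0    # pickup lenslet focal length (mm)
fd = 0.75   # display micro-convex-mirror focal length (mm)

for alpha in (1.0, 1.5, 2.0, 2.5):
    r = fd / (alpha * fp)
    print(alpha, r)  # r = 1/4, 1/6, 1/8, 1/10, respectively
```

These are exactly the four squeezing rates r1 through r4 quoted in the experimental section.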
Digital zoom-in generally degrades the resolution of the elemental images. Moreover, when zo→∞ and the object is very large, this linear control method cannot be used. We therefore consider a nonlinear depth control method.
3.2 Nonlinear depth control using curved pickup devices
For a large object that is far away, the elemental images are almost identical, because the parallax of the object is small for neighboring pickup lenslets. When such elemental images are displayed in the II system, the reconstructed image is seriously blurred and cannot be seen clearly. To solve this problem, we use curved pickup devices (i.e., a curved lenslet array and a curved 2-D image sensor) with a radius of curvature R, and then reconstruct 3-D images using planar display devices, as depicted in Figs. 3(a) and 3(b), respectively. Similarly, planar pickup devices and curved display devices may be used, as depicted in Figs. 3(c) and 3(d), respectively. We use the following sign convention: R>0 when the center of curvature is positioned on the same side as the object (observer) in the pickup (display) process, and R<0 when it is positioned on the opposite side.
The use of a negatively curved pickup lenslet array increases the disparity of neighboring elemental images. This is because the pickup directions of the lenslets in a curved array are not parallel, and thus their fields of view are more separated than those of a planar array. Equivalent elemental images would be obtained if we picked up an object of reduced size near the pickup lenslet array. Therefore, when elemental images with increased disparity are displayed on a planar display screen (a micro-convex-mirror array), an integral image with a reduced size is reconstructed near the screen. By controlling R, 3-D images of large objects that are far away can be displayed within the depth-of-focus of the II system.
The effect of depth and size reduction using the negatively curved pickup lenslet array can be analyzed by introducing a hypothetical thin lens with a negative focal length -Rp, which is in contact with the planar pickup lenslet array, as depicted in Fig. 3(e). This is because the ray propagation behaviors for the two setups in Figs. 3(a) and 3(e), and those in Figs. 3(d) and 3(f), are the same, respectively. We call this lens an optical path-length-equalizing (OPLE) lens. When two thin lenses with focal lengths f1 and f2 are in contact, the effective focal length becomes f1f2/(f1+f2). To get complete equivalence between the two setups, the focal length fp′ of the lenslet array that is in contact with the OPLE lens should be fp′ = Rpfp/(Rp+fp), where fp is the focal length of the curved pickup lenslets. In general, Rp ≫ fp and thus fp′ ≈ fp. Therefore, instead of using the curved pickup lenslet array with a radius of curvature -Rp and a focal length fp, and a curved image sensor in the analysis, we can use a planar lenslet array with a focal length fp′, a flat image sensor, and the pickup OPLE lens with a focal length -Rp.
We consider that the OPLE lens first produces images of the objects, and that these images are then actually picked up by the planar pickup devices to produce elemental images with increased disparity. For an object positioned at z=zo (>0), the OPLE lens produces its image according to Eq. (1) at
zi = zoRp/(zo + Rp).  (3)
As zo varies from ∞ to 0, zi changes from Rp to 0. When the elemental images with increased disparity are projected onto a planar micro-convex-mirror array screen, a virtual image is reconstructed at z=-zi if fd=fp. Therefore, Rp should be shorter than the depth-of-focus of the II system. The lateral magnification of the OPLE lens is given by zi/zo (<1) according to geometrical optics.
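A short sketch of Eq. (3), using the 33 cm OPLE focal-length magnitude of the experiments, illustrates how the whole object space is compressed into a slab of depth Rp (the formula is consistent with the limits quoted above, zi → Rp as zo → ∞):

```python
# Pickup OPLE lens imaging [Eq. (3)]: zi = zo*Rp/(zo + Rp).
# As zo -> infinity, zi -> Rp, so all objects image into a slab of depth Rp.
def ople_image(zo, Rp):
    return zo * Rp / (zo + Rp)

Rp = 33.0  # cm, focal-length magnitude of the OPLE lens used in Section 5
print(ople_image(20.0, Rp))    # cacti at 20 cm   -> ~12.5 cm
print(ople_image(7000.0, Rp))  # building at 70 m -> ~32.8 cm, close to Rp
```

Note the nonlinearity: the far building is compressed far more strongly than the nearby cacti, which is exactly the behavior needed for distant scenes.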
The effect of depth and size reduction can also be achieved by use of negatively curved display devices. Suppose that curved display devices with a radius of curvature -Rd are used, while the elemental images are obtained by use of planar pickup devices. As before, we introduce a hypothetical display OPLE lens in front of the planar display devices. Then, an orthoscopic virtual image of the object is reconstructed at
zr = zoRd/(zo + Rd)  (4)
for the object positioned at z=zo (>0) in the pickup process, if fd =fp .
3.3 Combination of linear and nonlinear depth control methods
In general, we can use both the linear and nonlinear depth control methods together. For an object positioned at z=zo, we can predict the position of the reconstructed image from the equivalent planar pickup and display devices with OPLE lenses. The pickup OPLE lens produces an image of the object at z=zi, where zi is given in Eq. (3). From this image, elemental images with increased disparity are obtained, and then they are digitally zoomed in. The planar display lenslet array then produces an intermediate reconstructed image at z=-rzi, where r is given in Eq. (2). Because of the display OPLE lens, from the Gauss lens law the final reconstructed image is obtained at z=-zr, where
zr = rziRd/(rzi + Rd).  (5)
As zo varies from ∞ to 0, zr changes from rRpRd /(rRp +Rd ) to 0.
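The combined mapping described above can be sketched as follows; the parameter values are illustrative, and only the limiting behavior quoted in the text is asserted:

```python
# Combined linear + nonlinear control [Eq. (5)]: the pickup OPLE lens gives
# zi = zo*Rp/(zo+Rp), zoom scales it by r, and the display OPLE lens maps it
# to zr = r*zi*Rd/(r*zi + Rd). Parameter values below are illustrative.
def reconstructed_depth(zo, r, Rp, Rd):
    zi = zo * Rp / (zo + Rp)
    return r * zi * Rd / (r * zi + Rd)

r, Rp, Rd = 0.25, 33.0, 50.0          # illustrative values (cm)
limit = r * Rp * Rd / (r * Rp + Rd)   # limiting depth as zo -> infinity
print(reconstructed_depth(1e9, r, Rp, Rd), limit)
```

All reconstructed depths therefore stay below rRpRd/(rRp+Rd), regardless of how far away the object is.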
4. Other system factors that influence 3-D image depth and size
4.1 The use of a modified pickup system
Because the physical size of the 2-D image sensor is smaller than that of the pickup lenslet array, a modified pickup system is usually used, as depicted in Fig. 4(a). Here, elemental images formed by a planar lenslet array are detected through a camera lens with a large f/#. The use of such a camera lens with the planar pickup lenslet array produces the effect of a negatively curved pickup lenslet array, because the disparity of the elemental images increases. We have to take this effect into account by treating the modified pickup system as a curved pickup system with a curved lenslet array whose radius of curvature is -Rc, where Rc is approximately equal to the distance between the planar pickup lenslet array and the camera lens.
Therefore, if the elemental images are detected through a camera lens when we use a curved pickup lenslet array with the radius of curvature Rp, as depicted in Fig. 4(b), the actual radius of curvature of the pickup lenslet array is considered to be
Rp′ = RpRc/(Rp + Rc).  (6)
This is equivalent to planar pickup devices with two OPLE lenses. In this case, we have to replace Rp with Rp′ in Eq. (5).
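The series combination of Eq. (6) can be checked against the values quoted in the experimental section:

```python
# Two OPLE lenses in series combine like thin lenses in contact [Eq. (6)]:
# Rp_eff = Rp*Rc/(Rp + Rc). Values are those of the experimental setup.
def combined_radius(Rp, Rc):
    return Rp * Rc / (Rp + Rc)

Rp = 33.0  # cm, OPLE (negative) lens focal-length magnitude
Rc = 20.0  # cm, distance from pickup lenslet array to camera lens
print(combined_radius(Rp, Rc))  # ~12.5 cm; without the OPLE lens, Rp_eff -> Rc
```

With the OPLE lens the effective radius drops from 20 cm to about 12.5 cm, i.e., the camera lens and the OPLE lens both contribute to the depth compression.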
4.2 Diverging projection of elemental images
When elemental images are projected onto a micro-convex-mirror array screen, the projection beam angle θ (e.g., in the azimuthal direction) may not be negligible. In this case, the effect of negatively curved display devices naturally exists even if planar display devices are used, as depicted in Fig. 5(a). Suppose that the horizontal size of overall projected elemental images on the screen is S. Then, one can consider the planar display devices as curved display devices with a radius of curvature -Rs ≈-S/θ if the aperture size of the relay optics is much smaller than S. In fact, Rs is approximately equal to the distance between the planar projection screen and the relay optics.
Suppose that we use such a diverging projection system with a negatively curved lenslet array with the radius of curvature -Rd, as depicted in Fig. 5(b), or with a negatively curved micro-convex-mirror array, as in Fig. 5(c). The actual radius of curvature of the display screen in the equivalent non-diverging system is
Rd′ = RdRs/(Rd + Rs).  (7)
In this case, we have to replace Rd with Rd′ in Eq. (5).
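A sketch of the diverging-projection estimate, using the values S = 52.3 mm and θ ≈ 6 degrees quoted in the experimental section:

```python
import math

# Diverging projection acts like a screen of radius Rs ~ S/theta; any real
# screen curvature Rd combines with it as Rd_eff = Rd*Rs/(Rd + Rs) [Eq. (7)].
S = 52.3e-3                # m, width of the projected elemental images
theta = math.radians(6.0)  # projection beam divergence (azimuthal)
Rs = S / theta
print(Rs)                  # ~0.50 m, matching the ~48 cm screen-relay distance

Rd = math.inf              # planar micro-convex-mirror screen
Rd_eff = Rs if math.isinf(Rd) else Rd * Rs / (Rd + Rs)
print(Rd_eff)              # equals Rs when the screen itself is planar
```

So even a planar screen behaves as a curved one under diverging projection, which is why Rd′ = Rs ≈ 50 cm is used in the experiments.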
5. Experiments
5.1 System description
The object to be imaged is composed of small cacti and a large building as shown in Fig. 6(a). The distance between the pickup lenslet array and the cacti is approximately 20 cm and that between the pickup lenslet array and the building is approximately 70 m. Because curved pickup devices are not available, elemental images are obtained by use of a planar 2-D image sensor and a planar lenslet array in contact with a large-aperture negative lens as an OPLE lens. The focal length and the diameter of the negative lens are 33 cm (=Rp ) and 7 cm, respectively. The planar pickup lenslet array we used is made from acrylic, and has 53×53 plano-convex lenslets. Each lenslet element is square-shaped and has a uniform base size of 1.09 mm×1.09 mm, with less than 7.6 µm separating the lenslet elements. The focal length of the lenslets is approximately 3 mm (=fp ). A total of 48×36 elemental images are used in the experiments.
A digital camera with 4500×3000 CMOS pixels was used as the 2-D image sensor. The camera pickup system is shown in Fig. 6(b). In this modified pickup system, Rc ≈ 20 cm. From Eq. (6), Rp′ = Rc = 20 cm when the OPLE lens is not used, and Rp′ = 12.5 cm when the OPLE lens is used.
We also used the linear depth reduction method in combination with the nonlinear method. To avoid resolution degradation caused by digital zoom-in, we kept the resolution of the zoomed-in elemental images higher than that of the LCD projector. Four different values of α were used: α1=1, α2=1.5, α3=2, and α4=2.5. A planar micro-convex-mirror array for the projection screen was obtained by coating the convex surface of a lenslet array identical to the pickup lenslet array. The light intensity reflectance of the screen is more than 90%. The focal length of each micro-convex mirror is 0.75 mm (=fd) in magnitude. Because fp=3 mm, the linear depth squeezing rates are r1=1/4, r2=1/6, r3=1/8, and r4=1/10 from Eq. (2) for α1, α2, α3, and α4, respectively.
The setup for 3-D image reconstruction is depicted in Fig. 7. A color LCD projector that has three (red, green, and blue) panels was used for elemental image projection. Each panel has 1024×768 square pixels with a pixel pitch of 18 µm. Each elemental image has approximately 21×21 pixels on average. The magnification of the relay optics is 2.9. The diverging angle of the projection beam θ is approximately 6 degrees in the azimuthal direction, so a slight curved-display effect exists even though the screen is planar. The distance between the screen and the relay optics is approximately 48 cm. Because S=52.3 mm, Rs ≈ 50 cm. From Eq. (7), Rd′ = 50 cm, because Rd=∞ in the experiments.
The position of the cacti is denoted by zoc (=20 cm) and that of the building by zob (=70 m). For the different values of r and Rp′, we can estimate the positions of the reconstructed images of the cacti, z=-zrc, and of the building, z=-zrb, from Eq. (5). The estimates are listed in Table 1.
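Estimates of this kind can be sketched from Eq. (5) with the effective radii derived above (Rp′ = 12.5 cm with the OPLE lens, Rd′ = 50 cm); the numbers below are computed here for illustration, and Table 1 in the text remains authoritative:

```python
# Sketch of reconstruction-depth estimates from Eq. (5), using the effective
# radii of Section 4. Illustrative computation; see Table 1 for actual values.
def z_r(zo, r, Rp, Rd):
    zi = zo * Rp / (zo + Rp)            # pickup OPLE image position, Eq. (3)
    return r * zi * Rd / (r * zi + Rd)  # final depth; image appears at z = -zr

Rp_eff, Rd_eff = 12.5, 50.0             # cm
zoc, zob = 20.0, 7000.0                 # cacti at 20 cm, building at 70 m

for r in (0.25, 1 / 6, 0.125, 0.1):
    zrc = z_r(zoc, r, Rp_eff, Rd_eff)
    zrb = z_r(zob, r, Rp_eff, Rd_eff)
    print(r, round(zrc, 2), round(zrb, 2))  # both stay within a few cm
```

Both objects land within a few centimeters of the screen, i.e., inside the roughly 5 cm depth-of-focus estimated for the PII system.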
5.2 Experimental results
Center parts of the elemental images obtained without the OPLE lens and with the OPLE lens are shown in Figs. 8(a) and 8(b), respectively. When α=2.5, the digitally zoomed-in versions of the elemental images in Figs. 8(a) and 8(b) are illustrated in Figs. 8(c) and 8(d), respectively. One can see that the OPLE lens increases the disparity between neighboring elemental images.
When the elemental images are projected onto the planar micro-convex-mirror array, 3-D orthoscopic virtual images are reconstructed. The measured viewing angle was 60 to 70 degrees, which agrees well with the predicted value. To observers who move beyond the viewing-angle range, the entire reconstructed image disappears. Higher-order reconstructed images were hardly observed for a well-aligned system. Left, upper, and right views of reconstructed 3-D images for different depth control parameters are illustrated in Figs. 9 and 10. The observed positions of the reconstructed images agree qualitatively with the estimated positions given in Table 1. Comparing the images shown in Figs. 9 and 10, one can see that smaller 3-D images are reconstructed for shorter Rp′. As r decreases, the reconstructed 3-D images squeeze further in the longitudinal direction and thus the disparity between left and right views is reduced. The lateral size of the reconstructed 3-D images is independent of r. Reconstructed 3-D images at deeper positions are more blurred because the depth-of-focus of the PII system is limited; it is estimated to be approximately 5 cm.
6. Discussion and conclusion
Binocular parallax is the most effective depth cue at medium viewing distances. In general, our depth control method degrades the solidity of reconstructed 3-D images because it squeezes their longitudinal depth more severely than their lateral size for distant objects. However, human vision also uses other depth cues, and binocular parallax may not be so effective at long viewing distances. Therefore, our nonlinear position control method can be used efficiently in large-scale 3-D display systems with limited depth-of-focus. Nevertheless, efforts to enhance the depth-of-focus of II systems should be pursued.
In conclusion, we have presented a method to control the depth and lateral size of reconstructed 3-D images in II, in which a curved pickup lenslet array or a curved micro-convex-mirror (display lenslet) array or both may be used. When the lenslets in the curved array have a zooming capability, a linear depth control is additionally possible. Using both control methods, we have shown that large objects at far distances can be reconstructed efficiently by an II system with limited depth-of-focus. This control will be useful for the realization of 3-D television, video, and movies based on II.
*This paper is dedicated to the memory of Dr. J. S. Jang (1961-2004). We thank Fushou Jin, Myungjin Cho, and Young-Wook Song of Pukyong National University for their helpful discussions.
References and links
1. S. A. Benton, ed., Selected Papers on Three-Dimensional Displays (SPIE Optical Engineering Press, Bellingham, WA, 2001).
2. T. Okoshi, “Three-dimensional display,” Proc. IEEE 68, 548–564 (1980). [CrossRef]
3. A. R. L. Travis, “The display of three-dimensional video images,” Proc. IEEE 85, 1817–1832 (1997). [CrossRef]
6. P. Ambs, L. Bigue, R. Binet, J. Colineau, J.-C. Lehureau, and J.-P. Huignard, “Image reconstruction using electrooptic holography,” Proceedings of the 16th Annual Meeting of the IEEE Lasers and Electro-Optics Society, LEOS 2003, vol. 1 (IEEE, Piscataway, NJ, 2003) pp. 172–173.
7. G. Lippmann, “La photographie integrale,” Comptes-Rendus Academie des Sciences 146, 446–451 (1908).
8. H. E. Ives, “Optical properties of a Lippmann lenticulated sheet,” J. Opt. Soc. Am. 21, 171–176 (1931). [CrossRef]
9. C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. 58, 71–76 (1968). [CrossRef]
10. N. Davies, M. McCormick, and M. Brewin, “Design and analysis of an image transfer system using microlens arrays,” Opt. Eng. 33, 3624–3633 (1994). [CrossRef]
12. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15, 2059–2065 (1998). [CrossRef]
14. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging with nonstationary micro-optics,” Opt. Lett. 27, 324–326 (2002). [CrossRef]
16. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37, 2034–2045 (1998). [CrossRef]
17. Y. Kim, J. Park, H. Choi, S. Jung, S. Min, and B. Lee, “Viewing-angle-enhanced integral imaging system using a curved lens array,” Opt. Express 12, 421–429 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-3-421. [CrossRef]
18. J.-S. Jang and B. Javidi, “Large depth-of-focus time-multiplexed three-dimensional integral imaging using lenslets with non-uniform focal lengths and aperture sizes,” Opt. Lett. 28, 1924–1926 (2003). [CrossRef]
19. J.-S. Jang, Y.-S. Oh, and B. Javidi, “Spatiotemporally multiplexed integral imaging projector for large-scale high- resolution three-dimensional display,” Opt. Express 12, 557–563 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-4-557. [CrossRef]
20. J.-S. Jang and B. Javidi, “Three-dimensional projection integral imaging using micro-convex-mirror arrays,” Opt. Express 12, 1077–1083 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-6-1077. [CrossRef]
21. J. S. Jang and B. Javidi, “Very-large scale integral imaging (VLSII) for 3D display,” to appear in the Journal of Optical Engineering, (2005).