
Spatiotemporally multiplexed integral imaging projector for large-scale high-resolution three-dimensional display

Open Access

Abstract

We present a projection method in integral imaging for large-scale high-resolution three-dimensional display. In the proposed method, the entire set of high-resolution elemental images, which contains a large number of pixels, is spatially divided into smaller image subsets. These subsets are then projected onto the corresponding lenslet-array positions either simultaneously, or sequentially at a rate faster than the flicker fusion frequency of the human eye, or both (i.e., spatiotemporal multiplexing). Thus, display panels that do not have a sufficient number of pixels by themselves can be used to display the entire set of elemental images. Preliminary experiments were performed using a galvanometer-based optical scanner.

©2004 Optical Society of America

1. Introduction

Three-dimensional (3-D) imaging and display have been the subject of much research because of their diverse benefits and applications [1–5]. Stereoscopic techniques have been the most widely used so far, because it is relatively easy to realize a stereoscopic system that displays large, high-resolution images [1]. However, stereoscopic techniques usually require supplementary glasses to evoke the 3-D visual effect, and they provide observers with only horizontal parallax and a limited number of viewpoints. Observation of stereoscopic images may also cause visual fatigue because of the convergence-accommodation conflict [3].

For the display and visualization of more natural-looking 3-D images in space with incoherent light, integral imaging (II), or real-time integral photography, has been studied [6–10]. In II, 3-D images are formed by integrating the rays coming from two-dimensional (2-D) elemental images using a lenslet (or pinhole) array. II can provide observers with true 3-D images with full parallax and continuous viewpoints, as in holography. However, II also has drawbacks. For example, the viewing angle, depth of focus, and resolution of the 3-D images are limited because lenslet arrays are used [11,12]. A number of techniques have been presented to overcome these limitations [13,14].

There are also device limitations in II. For large-scale high-resolution II, both the 2-D image sensor used for pickup and the 2-D display panel must have a very large number of pixels. Devices that meet such a requirement will not be available in the near future. Recently, a method to pick up high-resolution elemental images by multiplexing was studied, in which a CCD sensor with a small number of pixels was used [15]. However, to the best of our knowledge, there has been no study on displaying such elemental images with a large number of pixels using a small display panel for large-scale high-resolution 3-D II in real time.

In this paper, we propose a practical integral imaging projector that displays elemental images with a large number of pixels using spatiotemporal multiplexing. Here, spatial multiplexing means that the entire set of elemental images is spatially divided into small image subsets, which are then projected simultaneously onto the corresponding lenslet-array positions using multiple display devices. Temporal multiplexing means that the divided elemental-image subsets are sequentially projected onto the corresponding lenslet-array positions, faster than the flicker fusion frequency of the human eye, using one display device. Spatiotemporal multiplexing thus effectively increases the number of pixels of a display panel. The proposed method can be used for practical large-scale high-resolution 3-D display using II. To demonstrate the idea, preliminary experiments were performed using a galvanometer-based optical scanner.

In fact, projection schemes have been widely used in stereoscopic displays [16]. Although this has not been the case in II, there was a prior study on a projection type of integral imaging, which focused mainly on an all-optical implementation [17]. Here, we pursue large-scale 3-D II by adopting spatiotemporal multiplexing.

Fig. 1. Pickup and display in 3-D integral imaging.

2. Integral imaging: Review

A conventional II system is depicted in Fig. 1. A set of elemental images of a 3-D object (i.e., the direction and intensity information of the spatially sampled rays coming from the object) is obtained by use of a lenslet array and a 2-D image sensor such as a CCD or CMOS sensor. To reconstruct a 3-D image of the object, the set of 2-D elemental images is displayed in front of a lenslet array using a 2-D display panel, such as a liquid crystal display (LCD) panel. The rays coming from the elemental images converge to form a real 3-D image through the lenslet array. This 3-D image is a pseudoscopic (depth-reversed) image of the 3-D object. The pseudoscopic real image is converted into an orthoscopic virtual image when every elemental image is rotated by 180 degrees about its own optic axis [10]. It is also possible to display an orthoscopic real image by introducing an additional imaging lens in front of the pickup lenslet array [18]. Computer pickup and computer reconstruction have also been studied. In computer pickup, the elemental images of a 3-D object are calculated in a computer by simulating the pickup process, and 3-D images are reconstructed directly in an optical system [12,19]. In computer reconstruction, elemental images are obtained by direct pickup using a lenslet array, and 3-D images are reconstructed in a computer by simulating the optical reconstruction process [20,21].

The full viewing angle ψ is limited and is given approximately by ψ ≈ 2 arctan[0.5/(f/#)], where f/# is the f-number of a lenslet [10,12,14]. For example, even if f/# is as low as 1, ψ is limited to approximately 53 degrees. This means that the viewing region in which the entire 3-D image can be seen is restricted when a wide 3-D image is displayed. There have been a few studies on enhancing the viewing angle through optical system modifications, but these may not be practical in a large-scale II system. To increase the viewing angle, the use of an II projector with a micro-concave-mirror array instead of the lenslet array may be useful [22], because it is relatively easy to make concave mirrors with a very small f-number.
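
As a quick numeric check of the viewing-angle formula above, the following sketch evaluates ψ for a few illustrative f-numbers (the values other than f/# = 1 are assumptions chosen for the example):

```python
import math

# Full viewing angle: psi ~= 2 * arctan(0.5 / f_number)  [10,12,14]
for f_number in (1.0, 1.5, 2.0):
    psi_deg = 2 * math.degrees(math.atan(0.5 / f_number))
    print(f"f/# = {f_number}: full viewing angle ~ {psi_deg:.1f} degrees")

# f/# = 1.0 yields ~53 degrees, consistent with the limit quoted in the text.
```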

3. Integral imaging projector using spatiotemporal multiplexing of elemental images

3.1 Need of multiplexing

The product of the depth of focus and the square of the lateral resolution (PDLRS) of 3-D images in II is limited by 1/λ, where λ is the illumination wavelength [13]. This means that if we display a 3-D image with large depth, we have to sacrifice resolution, and vice versa. In the visible wavelength range, it is difficult to obtain a 3-D image resolution of even 1 or 2 lines/mm when the 3-D image depth is around 2 to 4 m. Although the absolute image resolution is low, the apparent viewing resolution can be improved by increasing the distance between the observers and the 3-D image. In this sense, II is suitable for a large-scale 3-D image display, such as a roof-top display, which is usually observed from tens or hundreds of meters away. In this case, an important parameter for image quality is the total number of volume pixels (voxels) in the displayed 3-D image, which is the number of 2-D image pixels in the lateral dimensions (Nx×Ny) multiplied by the number of longitudinal pixels (Nz).
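
The following sketch illustrates the PDLRS bound numerically, assuming a depth-of-focus × (lateral resolution)² ≤ 1/λ relation [13] and an illustrative wavelength of 0.5 µm (an assumption, not a value from the experiment):

```python
import math

WAVELENGTH = 0.5e-6  # m, assumed mid-visible wavelength

# PDLRS bound [13]: depth_of_focus * resolution**2 <= 1 / wavelength,
# with the lateral resolution expressed in lines per meter.
for depth in (2.0, 4.0):  # m
    max_resolution = math.sqrt(1.0 / (WAVELENGTH * depth))  # lines/m
    print(f"depth {depth:.0f} m -> max lateral resolution "
          f"~{max_resolution / 1000:.2f} lines/mm")

# depth 2 m -> ~1.0 lines/mm, depth 4 m -> ~0.71 lines/mm
```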

Fig. 2. Integral imaging projector with spatial multiplexing.

Fig. 3. System setup for integral imaging with temporal multiplexing.

To produce high-quality 3-D images with a large depth, elemental images with a large number of pixels are required. Suppose we display a 3-D image with a total number of voxels Nx×Ny×Nz ≈ 10³×10³×10³. Then we need a display panel with more than m×10⁹ pixels, where m is a ray multiplicity factor for forming a voxel. This is because each voxel in II is determined by the crossing point of multiple rays, each of which comes from a pixel of a distinct elemental image in the display panel through the corresponding lenslet. The ray multiplicity factor m for a given voxel is determined by both the longitudinal distance between the voxel and the lenslet array and the f/# of the lenslet. However, it seems that 2-D display panels with more than 10⁹ pixels will not be available in the near future.
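
A rough pixel-budget estimate makes this argument concrete. In the sketch below, the voxel counts and the ray multiplicity factor m are illustrative assumptions, not measured values, and the panel size is that of the XGA panel used later in the experiment:

```python
# Assumed target voxel counts (10^3 in each dimension) and ray multiplicity.
nx, ny, nz = 1000, 1000, 1000
m = 4  # assumed number of rays crossing at each voxel

required_pixels = m * nx * ny * nz          # pixels over all elemental images
panel_pixels = 1024 * 768                   # one XGA display panel
subsets_needed = -(-required_pixels // panel_pixels)  # ceiling division

print(f"required pixels: {required_pixels:.2e}")
print(f"one panel      : {panel_pixels:.2e}")
print(f"spatiotemporally multiplexed subsets needed: {subsets_needed}")
```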

Note that device limitations are not as serious in stereoscopic displays, because two (or at most a few) high-resolution 2-D images (i.e., left-eye and right-eye images) are sufficient. Nevertheless, spatial multiplexing can still be used to obtain higher image resolution in stereoscopic displays [23].

3.2 Projection type of II with spatiotemporal multiplexing

A projection type of II with spatial multiplexing is a possible solution to the pixel-number problem. In spatial multiplexing, many display panels (or 2-D projectors) are used to display the entire set of elemental images, as depicted in Fig. 2. Each display panel (or 2-D projector) projects only a subset of the entire elemental images onto the corresponding area of a semi-transparent screen, such as a diffusion plate with fine grains. The screen plays the role of the display panel behind the lenslet array in Fig. 1. In fact, the diffusion screen is not necessary (and often not desirable, because elemental images projected on the screen are magnified through the lenslet array, so the image of the grains in the screen is also magnified and added to the reconstructed 3-D image) if the projection angle θ is close to zero. To alleviate the need for a large number of display panels or 2-D projectors, we also adopt temporal multiplexing in displaying the elemental images. In temporal multiplexing, each display panel (or projector) projects multiple subsets of elemental images onto the corresponding areas of the screen (or the lenslet array) sequentially in the time domain. The projection speed should be faster than the flicker fusion frequency of the human eye. Figure 3 shows a setup for spatiotemporal multiplexing, in which only one projector is used with a 2-D galvanometer optical scanner for simplicity. As the x and y mirrors of the galvanometer scanner change their angles, the panel in the 2-D projector displays the corresponding part of the elemental images. The whole system can be controlled by, for example, a personal computer (PC). A lens in contact with the diffusion plate can be introduced to equalize the optical path lengths from the galvanometer scanner to every projection position on the lenslet array. This lens also makes the beams of the elemental-image subsets normally incident on the lenslet array. The focal length of this path-equalizing lens should be equal to the distance between the galvanometer scanner and the lenslet array.
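
A minimal sketch of the temporal-multiplexing control loop in Fig. 3 is given below. The device calls set_galvo_angles and show_on_projector are hypothetical stand-ins for whatever scanner and projector interfaces are actually available, and the timing values are assumptions:

```python
import time

FLICKER_FUSION_HZ = 50                       # target rate for one full cycle
SUBSET_GRID = [(r, c) for r in range(2) for c in range(3)]  # e.g., 3x2 subsets

def project_cycle(subsets, galvo_angles, set_galvo_angles, show_on_projector):
    """Project every elemental-image subset once, completing the whole cycle
    within a single flicker-fusion period."""
    dwell = 1.0 / (FLICKER_FUSION_HZ * len(SUBSET_GRID))
    for key in SUBSET_GRID:
        set_galvo_angles(*galvo_angles[key])  # steer the beam to this region
        show_on_projector(subsets[key])       # display the matching subset
        time.sleep(dwell)                     # hold until the next step
```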

3.3 Experiments

Experiments on temporal multiplexing are sufficient to demonstrate spatiotemporally multiplexed 3-D II, because spatial multiplexing simply adds more projectors operating in parallel. The projection setup depicted in Fig. 3 was used. The 3-D object to be displayed was synthesized in a computer and is composed of a red toroid, a green sphere, and a blue kettle, as shown in Fig. 4. The objects are separated by approximately 2 cm along the longitudinal direction. The display lenslet array has 53×53 lenslets. Each lenslet element is square, with a uniform base size of 1.09 mm×1.09 mm and less than 7.6 µm separating adjacent lenslets. The focal length of the lenslets is approximately 3 mm. For the 2-D projector, a color LCD projector with three (RGB) panels was used. Each panel has 1024×768 (horizontal × vertical) square pixels with a pixel pitch of 18 µm. The magnification factor of the relay optics in Fig. 3 is set to 1. We used 60×60 pixels to represent one elemental image, which means that only 17×12 lenslets (or elemental images) could be used to display the 3-D object without multiplexing of the elemental images.

Using the object in Fig. 4, elemental images for orthoscopic virtual 3-D image display were calculated in a computer according to ray optics, as shown in Fig. 5. The total number of pixels in all the elemental images is 2304×2048. Because each elemental image has 60×60 pixels, 38×34 lenslets (or elemental images) are used for the 3-D image display. The entire set of elemental images shown in Fig. 5 is divided into 3×2 subsets, each of which contains 768×1024 pixels. When the image of the 2-D LCD panel is projected onto the lenslet array through our galvanometer scanner, the image rotates by 90 degrees because of the x and y mirrors. Therefore, every elemental-image subset of 768×1024 pixels is rotated by -90 degrees before display, for compensation. The compensated image has 1024×768 pixels, which fits the LCD panel.
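
The splitting and pre-rotation step can be summarized with the short sketch below, which uses a placeholder array instead of the actual elemental images:

```python
import numpy as np

# Placeholder for the full 2304 (wide) x 2048 (high) set of elemental images.
full_set = np.zeros((2048, 2304))            # NumPy shape is (rows, cols)

subsets = []
for row_block in np.vsplit(full_set, 2):     # two blocks of 1024 rows each
    for block in np.hsplit(row_block, 3):    # three blocks of 768 columns each
        # Each block is 768 wide x 1024 high; rotating by -90 degrees gives a
        # 1024 x 768 image that fits the LCD panel.
        subsets.append(np.rot90(block, k=-1))

print(len(subsets), subsets[0].shape)        # 6 subsets, each of shape (768, 1024)
```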

Fig. 4. Computer-synthesized 3-D objects to be displayed.

Fig. 5. Calculated entire set of elemental images with 2304×2048 pixels.

The six elemental image subsets are displayed in a sequence as shown in Fig. 6, although there are many possible sequences. The distance between the galvanometer scanner and the lenslet array is approximately 40 cm. Because the optical path lengths from the galvanometer scanner to six projection positions in the lenslet array are not significantly different, we did not use the lens to equalize the optical path length. For high light efficiency, we did not use the diffusion plate in our system, in which the projection angle θ is negligible.

The six elemental-image subsets are stored in a PC. Synchronization between the sequential display of the six subsets and the corresponding projection positions of the galvanometer scanner (Cambridge Technology model 6450) is controlled by the same PC using a Visual C program. There was no noticeable elemental-image overlap in the experiment, because the short-term repeatability of the galvanometer scanner is sufficiently accurate (~1×10⁻⁶ degrees). The projected elemental-image subsets showed no noticeable edge blurring or intensity variation. The time required to complete one projection cycle of the six subsets is approximately 1 second in the current system. This low speed is caused mainly by the display speed of the video card in the PC we used. Therefore, the 3-D image was captured with a CCD camera and the frames were averaged in another PC to simulate the afterimage (temporal integration) effect of the human eye. The results for two different camera positions are shown in Fig. 7. Although indirect camera capture was used, the proposed approach was successfully demonstrated.
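
The frame-averaging step used to emulate the eye's temporal integration can be sketched as follows; captured_frames stands in for the CCD frames recorded over one projection cycle (the frame count and resolution are placeholders):

```python
import numpy as np

# Placeholder stack of CCD frames recorded during one 1-second projection cycle.
captured_frames = np.zeros((30, 480, 640))   # (frames, rows, cols)

# The pixel-wise temporal average approximates the afterimage (integration) effect.
integrated_view = captured_frames.mean(axis=0)
```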

Fig. 6. Projection sequence of the six elemental-image subsets.

Fig. 7. Reconstructed 3-D images: front view (left) and right view (right).

4. Discussion and conclusion

To increase the projection speed up to the flicker fusion frequency of the human eye (approximately 50 Hz), a parallel processing method for updating the elemental images on the video card of the PC should be used. The response time of currently available LCD panels is around 10 ms (~100 Hz frame rate). Thus, as the number of temporally multiplexed elemental-image subsets increases, faster display panels are needed. In the experiment, the maximum excursion angle of the galvanometer scanner is approximately 4 degrees, and a scanning frequency of up to ~200 Hz was possible in our system. For faster scanning, acousto-optic modulators and polychromatic coherent light may be used.

In conclusion, we have presented a projection II method in which elemental images with a large number of pixels can be displayed using commercially available display panels with a small number of pixels. The proposed technique is based on spatiotemporal multiplexing of elemental images. With this method, it is possible in principle to achieve large-scale high-resolution 3-D II. To demonstrate the feasibility of the method, preliminary experiments were performed.

Acknowledgments

This work was supported in part by Korea Science and Engineering Foundation grant R05-2003-000-10968-0.

References and links

1. S. A. Benton, ed., Selected Papers on Three-Dimensional Displays (SPIE Optical Engineering Press, Bellingham, WA, 2001).

2. D. H. McMahon and H. J. Caulfield, “A technique for producing wide-angle holographic displays,” Appl. Opt. 9, 91–96 (1970).

3. T. Okoshi, “Three-dimensional display,” Proc. IEEE 68, 548–564 (1980).

4. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997).

5. P. Ambs, L. Bigue, R. Binet, J. Colineau, J.-C. Lehureau, and J.-P. Huignard, “Image reconstruction using electrooptic holography,” in Proceedings of the 16th Annual Meeting of the IEEE Lasers and Electro-Optics Society (LEOS 2003), Vol. 1 (IEEE, Piscataway, NJ, 2003), pp. 172–173.

6. G. Lippmann, “La photographie integrale,” Comptes-Rendus Academie des Sciences 146, 446–451 (1908).

7. H. E. Ives, “Optical properties of a Lippmann lenticulated sheet,” J. Opt. Soc. Am. 21, 171–176 (1931).

8. C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. 58, 71–76 (1968).

9. N. Davies, M. McCormick, and M. Brewin, “Design and analysis of an image transfer system using microlens arrays,” Opt. Eng. 33, 3624–3633 (1994).

10. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36, 1598–1603 (1997).

11. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15, 2059–2065 (1998).

12. J.-S. Jang, F. Jin, and B. Javidi, “Three-dimensional integral imaging with large depth of focus using real and virtual image fields,” Opt. Lett. 28, 1421–1423 (2003).

13. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging with nonstationary micro-optics,” Opt. Lett. 27, 324–326 (2002).

14. S.-W. Min, B. Javidi, and B. Lee, “Enhanced three-dimensional integral imaging system by use of double display devices,” Appl. Opt. 42, 4186–4195 (2003).

15. L. Erdmann and K. J. Gabriel, “High-resolution digital integral photography by use of a scanning microlens array,” Appl. Opt. 40, 5592–5599 (2001).

16. B. Javidi and F. Okano, eds., Three Dimensional Television, Video, and Display Technologies (Springer, Berlin, 2002).

17. J.-S. Jang and B. Javidi, “Real-time all-optical three-dimensional integral imaging projector,” Appl. Opt. 41, 4866–4869 (2002).

18. J.-S. Jang and B. Javidi, “Formation of orthoscopic three-dimensional real images in direct pickup one-step integral imaging,” Opt. Eng. 42, 1869–1870 (2003).

19. S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Three-dimensional display system based on computer-generated integral photography,” in Stereoscopic Displays and Virtual Reality Systems VIII, A. J. Woods, M. T. Bolas, J. O. Merritt, and S. A. Benton, eds., Proc. SPIE 4296, 187–195 (2001).

20. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001).

21. Y. Frauel and B. Javidi, “Digital three-dimensional image correlation by use of computer-reconstructed integral imaging,” Appl. Opt. 41, 5488–5496 (2002).

22. Y. Jeong, S. Jung, J.-H. Park, and B. Lee, “Reflection-type integral imaging scheme for displaying three-dimensional images,” Opt. Lett. 27, 704–706 (2002).

23. G. Bresnahan, R. Gasser, A. Abaravichyus, E. Brisson, and M. Walterman, “Building a large-scale high-resolution tiled rear-projected passive stereo display system based on commodity components,” in Stereoscopic Displays and Virtual Reality Systems X, A. J. Woods, M. T. Bolas, J. O. Merritt, and S. A. Benton, eds., Proc. SPIE 5006, 19–30 (2003).
