Abstract

In a three-dimensional display scheme based on integral imaging, the mismatch of the system parameters between the pickup and display systems, or between different display systems, is an important issue from a practical point of view. In this paper, we propose a method that provides excellent flexibility with respect to the integral imaging system parameters and display conditions. In the proposed method, elemental images obtained in the pickup process are digitally analyzed and full three-dimensional information of the object is extracted. The extracted three-dimensional information is then transmitted to each display system and modified so as to be suitable for the display conditions and the system parameters of that display system. Finally, elemental images are generated from the modified three-dimensional information for the display system and integrated in the form of a three-dimensional image.

©2004 Optical Society of America

1. Introduction

Integral imaging has attracted a great deal of attention in recent years [1]–[14]. The full-color, full-parallax moving three-dimensional (3D) images of integral imaging distinguish the technology from conventional stereoscopic displays that are based on binocular parallax. Compact system configuration is another advantage of integral imaging, which makes it a practical alternative to other volumetric or holographic 3D displays.

One of the crucial issues for the practical implementation of an integral imaging system is the mismatch of the system parameters, especially the lens arrays. An integral imaging system consists of pickup and display parts as shown in Fig. 1(a). In the pickup part, an object is imaged through a lens array. Each elemental lens forms a corresponding image of the object, called an elemental image, and these elemental images are captured by a charge-coupled device (CCD) camera. In the display part, the captured elemental images are displayed on the display panel and integrated by the lens array into a 3D image at the same depth where the original object was located. In this configuration, the specifications of the lens arrays used in the pickup and display parts should be the same if the elemental images are to be integrated correctly. If a different lens array is used in the display part, as shown in Fig. 1(b), the elemental images cannot be integrated properly and the displayed image is not meaningful. This restriction on the lens array severely limits the utility of a 3D display based on integral imaging. Since we cannot expect all display systems to have lens arrays of the same specifications, it becomes necessary to prepare various sets of elemental images for different lens arrays by repeating the pickup process numerous times with various lens arrays, which is obviously impractical. This is especially important in 3D television broadcasting systems, where a television set at the receiver end would likely have a different panel size and, hence, a lens array consisting of elemental lenses with a different number or pitch. Moreover, the desired specifications of the lens array differ between pickup and display.
In the pickup process, a lens array with a small overall size, implying a small elemental lens pitch and a small number of elemental lenses, would be preferred to make the overall pickup system as compact as common two-dimensional (2D) camera systems. On the other hand, in the display process, a lens array with a large overall size is preferred for the display of large 3D images. This conflicting need for the lens arrays between the pickup and display processes leads to severe restrictions on the lens array.

Fig. 1. Concept of integral imaging and lens array mismatch: (a) when the same lens arrays are used in the pickup and display; (b) when different lens arrays are used in the pickup and display.

In order to alleviate the lens array mismatch problem, the use of elemental image shifting [15] or ratio-conserving scaling [16] has been proposed. Elemental image shifting avoids the lens array mismatch problem by shifting each elemental image with respect to its center. In elemental image shifting, however, the integrated image is compressed or expanded in the depth direction; consequently, the integrated image is not the same as the original object. In the case of the ratio-conserving scaling algorithm, each elemental image is magnified or reduced by a factor determined by the system parameters of the pickup and display systems. With magnified or reduced elemental images, a 3D image with the same longitudinal-transverse ratio as the original object can be integrated. The ratio-conserving scaling algorithm, however, generally locates the integrated 3D image outside the range where it can be integrated with the best resolution by the display system. Moreover, the ratio-conserving scaling algorithm cannot cope with a change in the number of elemental lenses.

In this paper, we propose a novel scheme incorporating the extraction of 3D information between the pickup and display processes. Figure 2 shows the concept of the proposed method.

In the proposed method, 3D information of the object is extracted from the elemental images obtained in the pickup process, and the extracted 3D information is then transmitted to each display system. At each display system, the 3D profile of the object, such as thickness, depth, and transverse size, can be modified for the specific needs of that display system. The elemental images are finally generated from the modified 3D information and the given system parameters of the display system. Therefore the proposed method completely eliminates the restriction arising from the mismatch of the system parameters. Since the full 3D information of the object can be modified by digital processing, excellent flexibility in the 3D profile of the displayed 3D image can be achieved for a given set of elemental images. The pseudoscopic image problem of integral imaging is also solved naturally by inverting the depth of the object. Though some parts of the proposed method - depth extraction and the generation of elemental images - have been reported [4], [11], [17]–[19] separately, this is, to the authors' knowledge, the first report on a framework comprising them as well as the digital manipulation of the extracted 3D information, and an experimental verification of the resultant alleviation of the lens array mismatch problem along with excellent flexibility in the 3D profile of the displayed 3D image.

Fig. 2. Concept of the proposed method.

In the following, the extraction of the 3D information and its conversion to elemental images with an appropriately modified 3D profile are discussed. We experimentally demonstrate the extraction, the conversion to elemental images, and their integration into a 3D image.

2. Description of the proposed method

2.1 Extraction of 3D information from elemental images

In the integral imaging pickup process, elemental images are formed by the elemental lenses in the lens array and captured by the CCD. Each elemental image is actually a perspective of the 3D object at the specific viewing direction determined by the relative position of the corresponding elemental lens to the object. Therefore the 3D information on the object is stored in the form of a set of the elemental images in the pickup process. By analyzing the elemental images, 3D information on the object can be extracted.

Figure 3 shows the geometry of the pickup part of the integral imaging system. The object point $(y, z)$ is imaged by the $q$-th elemental lens at $y_{i,q}$ in the elemental image plane and captured at $y_{c,q}$ in the CCD plane. The position $y_{c,q}$ of the $q$-th elemental image of the object point $(y, z)$ is given by

$$y_{c,q} = -\frac{f_a f_c (q\varphi - y)}{l z} - \frac{f_c q\varphi}{l}, \tag{1}$$

where $f_a$ is the focal length of the elemental lens, $f_c$ the focal length of the camera lens, $\varphi$ the elemental lens pitch, and $l$ the distance between the focal plane of the lens array and the camera lens. The difference in position, or disparity, between the $q_1$-th and $q_2$-th elemental images ($q_2 > q_1$) becomes

$$d_{q_1,q_2} = y_{c,q_1} - y_{c,q_2} = (q_2 - q_1)\,\frac{f_a f_c \varphi}{l}\left(\frac{1}{z} + \frac{1}{f_a}\right) = (q_2 - q_1)\,d, \tag{2}$$

where $d$ denotes the fundamental disparity as shown in Fig. 4. Since $f_a$, $f_c$, $l$, and $\varphi$ are known parameters of the pickup system, the depth information $z$ of the object can be extracted by detecting the disparity $d$. Once the depth $z$ is known, the transverse information $y$ can also be extracted using Eq. (1), and thus full 3D information on the object is obtained.
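As a concrete illustration of this inversion, the sketch below recovers $z$ from a measured fundamental disparity by inverting Eq. (2). The numerical parameter values are assumed for illustration only; they are not the values of the experimental setup.

```python
def disparity_from_depth(z, f_a, f_c, phi, l):
    # Forward form of Eq. (2) for a unit lens-index difference (q2 - q1 = 1):
    # d = (f_a * f_c * phi / l) * (1/z + 1/f_a)
    return (f_a * f_c * phi / l) * (1.0 / z + 1.0 / f_a)

def depth_from_disparity(d, f_a, f_c, phi, l):
    # Invert Eq. (2) to extract the depth z from a detected disparity d
    inv_z = d * l / (f_a * f_c * phi) - 1.0 / f_a
    return 1.0 / inv_z

# Round trip with assumed, purely illustrative parameters (millimetres)
params = dict(f_a=3.3, f_c=50.0, phi=1.09, l=170.0)
d = disparity_from_depth(100.0, **params)
z = depth_from_disparity(d, **params)
print(z)  # ~100.0, the original depth
```

Once $z$ is known for a pixel, Eq. (1) gives the corresponding transverse position $y$ in the same way.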

Fig. 3. Geometry of the integral imaging pickup part.

Note that, in the simple configuration shown in Fig. 3, the pickup directions of the elemental images are not parallel so that the integration of those elemental images is accompanied by distortion in the display process [20]. To make the pickup directions parallel in a conventional integral imaging system, it is necessary to use an additional convex lens that is sufficiently large to cover the lens array. In the proposed method, however, the simplified integral imaging pickup system that is shown in Fig. 3 can be easily used, since the elemental images obtained in the pickup system are not directly used for the display but analyzed for the extraction of the 3D information. No variation in the pickup configuration shown in Fig. 3 or supplementary devices for optical pseudoscopic-orthoscopic conversion is needed in the proposed system, as explained in the following section.

In the extraction of the 3D information, depth is the first information extracted and is found by detecting the disparity given by Eq. (2). The detection of the disparity between two or more perspectives was developed in the field of machine vision or computer vision decades ago. Such techniques are typically called stereo matching algorithms [21]–[23]. These stereo matching algorithms can be directly applied to the elemental images obtained in the pickup process because the elemental images are a set of different perspectives of the object [17]. The main difference between the usual stereo matching configuration and the integral imaging pickup system is that stereo matching utilizes two or more cameras to capture the perspectives of the object, while an integral imaging pickup system incorporates a lens array and a single camera. Therefore the integral imaging pickup system provides many more perspectives (as many as the number of elemental lenses) than the usual stereo matching system. Since all perspectives are captured by one camera, however, the resolution of each perspective is typically low. Generally, the accuracy of disparity detection depends on the number of input images and the resolution of each. Hence, two counteracting characteristics of the integral imaging pickup system - a large number of perspectives, but a low resolution for each perspective - determine the reliability of disparity detection and of the extracted 3D information on the object.
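A minimal intensity-based matcher in the spirit of these stereo algorithms can be sketched as follows. This is a toy sum-of-squared-differences (SSD) search over a synthetic image pair, not the algorithm of any particular reference; all names and sizes are illustrative.

```python
import numpy as np

def ssd_disparity(ref, other, y, x, patch=3, max_disp=5):
    # Search along x for the horizontal shift that minimizes the
    # sum of squared differences between corresponding patches
    h = patch // 2
    tpl = ref[y - h:y + h + 1, x - h:x + h + 1]
    best_cost, best_d = float("inf"), 0
    for d in range(max_disp + 1):
        if x - h - d < 0:
            break
        cand = other[y - h:y + h + 1, x - h - d:x + h + 1 - d]
        cost = float(((tpl - cand) ** 2).sum())
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic pair: 'other' is 'ref' shifted left by 2 pixels, so the
# patch matching the reference sits 2 pixels to the left in 'other'
rng = np.random.default_rng(0)
ref = rng.random((10, 12))
other = np.roll(ref, -2, axis=1)
print(ssd_disparity(ref, other, y=5, x=6))  # 2
```

In a real elemental-image set, the same search would be repeated for every pixel of the reference elemental image, and the per-pixel disparities converted to depths via Eq. (2).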

Fig. 4. Disparity of the elemental images.

In addition to the low resolution of each elemental image, the narrow field of view of the elemental lens is another restricting factor in the acquisition of the 3D information of an object. The field of view of each elemental lens is given by

$$\rho = 2\tan^{-1}\!\left(\frac{\varphi l}{2(l + f_a) f_a}\right). \tag{3}$$

When the lateral dimension of the object exceeds the field of view of the elemental lens, 3D information on the object can be fully recovered only by collecting the partial 3D information from each reference elemental image and combining them. In this case, however, a small error in the depth extracted from each reference elemental image can result in large abrupt discontinuities and distortion in the reconstructed image. Therefore an object with a small lateral dimension is desirable, and for a larger one, a more sophisticated technique is required.
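For orientation, Eq. (3) is easy to evaluate numerically; the parameter values below are assumed for illustration and are not those of the experimental setup.

```python
import math

def elemental_fov(phi, l, f_a):
    # Field of view of one elemental lens, Eq. (3), returned in degrees
    return math.degrees(2.0 * math.atan(phi * l / (2.0 * (l + f_a) * f_a)))

# With assumed, purely illustrative parameters (millimetres):
print(elemental_fov(phi=1.09, l=170.0, f_a=3.3))  # roughly 18 degrees
```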

Some techniques have been proposed to fully utilize the large number of perspectives obtained in the integral imaging pickup process and overcome its low resolution and narrow field of view. One involves the generation of sub-images from the elemental images obtained in the pickup process [18]. A sub-image is an image consisting of the pixels at the same position in every elemental image. Disparity detection is performed on the sub-images instead of the elemental images. Using this technique, the low resolution and narrow field of view of the elemental images can be overcome to some extent. Since the resolution of a generated sub-image is determined by the number of elemental images, this technique is suitable for an integral imaging system in which the lens array has a very small elemental lens pitch and a large number of elemental lenses. In another technique, sub-images are formed along only one direction [19]. Hence the generated images have different disparity characteristics along the two directions. This difference in disparity characteristics increases the detectable range of the depth information of the object and eliminates ambiguities in disparity detection.
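The sub-image construction described above amounts to a simple rearrangement of the captured array. The sketch below assumes a hypothetical layout in which the elemental images are tiled into one 2D array; the sizes are illustrative.

```python
import numpy as np

def sub_images(tiled, py, px):
    # tiled: (ny*py, nx*px) array of ny x nx elemental images, each py x px.
    # sub[u, v] gathers pixel (u, v) from every elemental image, so each
    # sub-image has one pixel per elemental lens: result shape (py, px, ny, nx)
    H, W = tiled.shape
    ny, nx = H // py, W // px
    return tiled.reshape(ny, py, nx, px).transpose(1, 3, 0, 2)

# A 3 x 3 grid of 2 x 2 elemental images
tiled = np.arange(36).reshape(6, 6)
subs = sub_images(tiled, 2, 2)
print(subs.shape)                                     # (2, 2, 3, 3)
print(np.array_equal(subs[0, 0], tiled[0::2, 0::2]))  # True
```

Each sub-image is thus a parallel-projection-like view whose resolution equals the number of elemental lenses, which is why the technique favours lens arrays with many small-pitch elemental lenses.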

2.2 Modification of the extracted 3D information

The acquired 3D information of the object is stored and transmitted to each display system. In the display system, given the 3D information, the 3D profile of the object, such as thickness, depth, and transverse size, can be modified so as to be suitable for the parameters of the specific display system and the display conditions. In the integral imaging display system, the 3D profile of the integrated image that can be displayed with tolerable resolution depends on the system parameters. For example, in a display system with a lens array with a large elemental lens pitch, the resolution of the integrated image is good but a thick image cannot be displayed due to the small depth of focus. The distance, or depth, of the integrated image from the lens array should also be selected based on the focal length and the elemental lens pitch of the lens array. If a 3D image is integrated too far from the lens array compared to the focal length, the resolution of the integrated image becomes severely degraded due to the large magnification of the pixels of the display panel through the lens array. Therefore each display system has its own appropriate depth range and image size for integrating a 3D image with the best quality, and it is important to keep the 3D profile of the object within that condition. Since, in the proposed method, the 3D profile of the object can be modified arbitrarily, these specific needs of each display system can be fully satisfied. Consequently the proposed method provides integral imaging systems with excellent flexibility with respect to system parameters.
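Since the extracted 3D information is essentially a cloud of voxel coordinates, tailoring it to a given display reduces to simple arithmetic. A minimal sketch, assuming a hypothetical `(N, 3)` voxel representation with the modification factors as free parameters:

```python
import numpy as np

def modify_profile(voxels, transverse_scale=1.0, depth_scale=1.0, depth_offset=0.0):
    # voxels: (N, 3) array of (x, y, z). Scale the transverse size, rescale
    # the thickness about the central depth plane, and shift the depth.
    out = voxels.astype(float).copy()
    out[:, :2] *= transverse_scale
    zc = out[:, 2].mean()                      # central depth plane
    out[:, 2] = zc + depth_scale * (out[:, 2] - zc) + depth_offset
    return out

# Enlarge transversely by 1.5x and halve the thickness, leaving the
# central depth plane where it was
v = np.array([[10.0, 20.0, 100.0], [30.0, 40.0, 130.0]])
m = modify_profile(v, transverse_scale=1.5, depth_scale=0.5)
print(m[:, 2])  # [107.5 122.5]
```

In practice the scale factors would be chosen so that the modified object fits the depth range and image size that the target display integrates best.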

Fig. 5. Concept of the pseudoscopic image problem.

One thing that should be noted is that the proposed method solves the pseudoscopic image problem of the integrated image in a natural way. In the pseudoscopic image problem, the depth of the integrated image is inverted owing to the opposite observation directions in the pickup and display processes, as shown in Fig. 5. Conventional integral imaging systems avoid the pseudoscopic image by rotating each elemental image with respect to the center of its elemental image region, by means of a gradient-index lens array [24] or by digital image processing [20]. The image integrated from the rotated elemental images is located behind the lens array with an inverted depth, thus solving the pseudoscopic image problem. However, this conventional method is accompanied by a distortion of the integrated image in the depth direction. Let us assume for simplicity that the pickup system consists of a lens array and a film, as shown in Fig. 6(a).

The position $y_{f,q}$ of the elemental image of the object point at $(y, z_c + b)$ is given by

$$y_{f,q} = \frac{g_p (q\varphi - y)}{z_c + b} + q\varphi, \tag{4}$$

where $z_c$ is the central depth plane of the object and $g_p$ is the gap between the lens array and the film. After the rotation of each elemental image, the elemental image position $y'_{f,q}$ becomes

$$y'_{f,q} = -\frac{g_p (q\varphi - y)}{z_c + b} + q\varphi. \tag{5}$$

In the display system, the rotated elemental image is imaged by the corresponding elemental lens at $(y_{i,q},\, z'_c + b')$ behind the lens array, as shown in Fig. 6(b), and $y_{i,q}$ is given by

$$y_{i,q} = q\varphi\left(1 - \frac{g_p (z'_c + b')}{g_d (z_c + b)}\right) + y\,\frac{g_p (z'_c + b')}{g_d (z_c + b)}, \tag{6}$$

where $g_d$ is the gap between the lens array and the display panel. In order for the elemental images to be integrated, the term in the first parenthesis should be zero. Assuming that the gap $g_p$ in the pickup system and the gap $g_d$ in the display system are adjusted so that the central depth planes of the object and of the integrated image are in focus, i.e., $g_p = z_c f_a/(z_c - f_a)$ and $g_d = z'_c f_a/(z'_c + f_a)$,

$$1 - \frac{z_c (z'_c + f_a)(z'_c + b')}{z'_c (z_c - f_a)(z_c + b)} = 0 \tag{7}$$

should be satisfied. If we set $b' = 0$ when $b = 0$, we obtain

$$z'_c = z_c - 2f_a, \tag{8}$$
$$b' = \frac{z_c - 2f_a}{z_c}\,b, \tag{9}$$

and by substituting Eqs. (8) and (9) into Eq. (6), we obtain

$$y_{i,q} = y. \tag{10}$$

Equations (9) and (10) indicate that the transverse magnification is 1 while the longitudinal magnification is $(z_c - 2f_a)/z_c$. Hence, though the pseudoscopic image problem is solved, the integrated image is compressed along the depth direction compared with the original object. Therefore, in the conventional integral imaging system, the integrated 3D image must be located behind the lens array and it is impossible to integrate an orthoscopic 3D image without depth distortion. In the proposed method, however, the pseudoscopic image problem is easily solved without any distortion. Since the 3D information of each voxel of the object is extracted, an orthoscopic image is easily generated by inverting the depth of the voxels with respect to the central depth plane of the object. There is no distortion in any direction, since the depth information of the voxels is accessed directly.
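The orthoscopic conversion described here amounts to one line of arithmetic on the voxel depths. A sketch, assuming the same hypothetical `(N, 3)` voxel representation and taking the central depth plane as the mean detected depth:

```python
import numpy as np

def invert_depth(voxels):
    # Reflect each voxel's depth about the central depth plane z_c,
    # turning the pseudoscopic reconstruction into an orthoscopic one
    out = voxels.astype(float).copy()
    zc = out[:, 2].mean()          # central depth plane (mean detected depth)
    out[:, 2] = 2.0 * zc - out[:, 2]
    return out

# Two voxels at depths 100 mm and 130 mm swap places about z_c = 115 mm,
# while their transverse positions are untouched
v = np.array([[1.0, 2.0, 100.0], [3.0, 4.0, 130.0]])
print(invert_depth(v)[:, 2])  # [130. 100.]
```

Because the reflection is applied to the depth coordinates directly, the transverse and longitudinal magnifications both remain 1, unlike the elemental-image rotation method analyzed above.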

Fig. 6. Conventional method for overcoming the pseudoscopic image problem: (a) simplified pickup system; (b) display as a virtual image.

2.3 Generation of the elemental images

The generation of elemental images with 3D information of the object is performed based on the principle of computer-generated integral imaging (CGII) [25]. The elemental image position of each voxel of the object with modified 3D information is calculated by simple geometric optics, on the basis of the system parameters of the integral imaging display system. The calculated elemental image points are accumulated and stored in depth order.
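A pinhole-model sketch of this generation step is given below. The geometry and all parameter values are illustrative assumptions, not the CGII implementation of Ref. [25].

```python
import numpy as np

def generate_elemental_images(voxels, n_lens, pitch, gap, res):
    # Project each voxel (x, y, z, intensity) through every elemental lens
    # centre onto the display panel (pinhole model). Voxels are processed
    # far-to-near so that nearer voxels overwrite, i.e. occlude, farther ones.
    img = np.zeros((n_lens * res, n_lens * res))
    for x, y, z, inten in voxels[np.argsort(-voxels[:, 2])]:
        for qy in range(n_lens):
            for qx in range(n_lens):
                cx, cy = (qx + 0.5) * pitch, (qy + 0.5) * pitch
                ex = cx + gap * (cx - x) / z      # panel-plane intersection
                ey = cy + gap * (cy - y) / z
                # keep the point only if it falls in this lens's own region
                if qx * pitch <= ex < (qx + 1) * pitch and qy * pitch <= ey < (qy + 1) * pitch:
                    img[int(ey / pitch * res), int(ex / pitch * res)] = inten
    return img

# One on-axis voxel seen through a 3 x 3 lens array: each elemental lens
# records it once, at a position that depends on the lens offset
voxels = np.array([[1.5, 1.5, 30.0, 1.0]])
img = generate_elemental_images(voxels, n_lens=3, pitch=1.0, gap=3.0, res=10)
print(int(img.sum()))  # 9
```

Storing the projections in depth order, as here, is one simple way to realize the accumulation mentioned above.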

2.4 Quality of the generated 3D image by the proposed method

One drawback of the proposed method is that the quality of the generated 3D image can be degraded. In conventional integral imaging, the object is sampled to produce an elemental image set, and this set of elemental images is directly integrated into a 3D image; hence the sampling process occurs once. In the proposed method, however, 3D information is extracted from the elemental image set, which is already sampled from the object. As a result, sampling is performed a second time to produce a new set of elemental images from the modified 3D information of the object. Thus sampling occurs twice in the proposed method, and this additional sampling process may result in a degradation of the quality of the generated 3D image.

In order to alleviate the degradation of image quality, the resolution of the extracted 3D information should be enhanced. Besides interpolation, the super-resolution techniques [26], [27] developed in the field of image processing represent a possible alternative for this purpose. Since super-resolution techniques produce a high-resolution image from a set of low-resolution images, they can be applied to the elemental images obtained by integral imaging [28], [29] to enhance their resolution. By extracting 3D information from the high-resolution elemental images obtained by applying super-resolution techniques, the image degradation problem of the proposed method can be relaxed.

3. Experimental results

We demonstrated the proposed method experimentally. Two longitudinally separated dice were picked up, and their 3D information was extracted and modified in various ways. The elemental images for various lens arrays were then generated, and among them, two sets of elemental images for different lens arrays were integrated using the corresponding lens arrays to verify the validity of the proposed method. We used the pickup system shown in Fig. 3. Two lens arrays were prepared for the experiment. We picked up the object using lens array 1 and displayed the integrated image using lens array 1 and lens array 2. The specifications of the experimental setup are listed in Table 1.

Fig. 7. A portion of the elemental images captured by the CCD.

Figure 7 shows a part of the elemental images obtained by the pickup. The elemental image of the die with 5 spots on its face is larger than that of the die with 6 spots because it was closer to the lens array in the pickup. The elemental images were then digitally processed for disparity detection. In the experiment, we applied a multiple-baseline stereo algorithm, a type of intensity-based block matching technique [17], [23]. In the multiple-baseline stereo algorithm, the disparity map of the reference elemental image is obtained by using multiple elemental images collectively. If more elemental images are used in the disparity detection, the accuracy is enhanced but more computation time is required. Although the accuracy of disparity detection relies on the number of elemental images used, it tends to saturate at a certain level. As a compromise between accuracy and computation time, we used the elemental images located on the cross centered at the reference elemental image. Our experimental result shows little enhancement in the accuracy of depth detection even when all elemental images are used.
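The cross-shaped selection used here can be expressed as a simple index filter; the grid size and reference position below are hypothetical, chosen only for illustration.

```python
def cross_indices(i0, j0, n_rows, n_cols):
    # Elemental images sharing a row or a column with the reference one
    return [(i0, j) for j in range(n_cols) if j != j0] + \
           [(i, j0) for i in range(n_rows) if i != i0]

# On a hypothetical 13 x 13 lens array with the reference at the centre,
# the cross uses 24 of the 168 non-reference elemental images
sel = cross_indices(6, 6, 13, 13)
print(len(sel))  # 24
```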

Table 1. Specifications of the experimental setup.

Figures 8 and 9 show the detected disparity map and the calculated 3D information for the object. In Figs. 8 and 9, $x_r$ and $y_r$ are the coordinates with respect to the upper-left corner of the reference elemental image. Figure 8(a) shows the detected disparity map for the reference elemental image. We can see that the detected disparity of the die with 6 facing spots is smaller than that of the die with 5 facing spots. This is a direct result of the fact that the depth of the die with 6 facing spots is larger than that of the die with 5 facing spots in our experiment, since the disparity is inversely related to the depth, as given by Eq. (2). We can also see that the detected disparity map contains some irregular noise. This noise in the disparity map can induce much larger errors and discontinuities in the x, y, and z information on the object, as indicated by Eqs. (1) and (2). In order to reduce the irregular fluctuation in the disparity map, the disparity map was regulated by a disparity gradient constraint that limits the disparity gradient to below a certain value [30]. The regulated disparity map is shown in Fig. 8(b). We can see that the fluctuation is reduced considerably in the regulated version. With the detected disparity map, the x, y, and z information of each voxel of the object was calculated using Eqs. (1) and (2). Figure 9 shows the 3D information obtained. Note that Figs. 9(a)–(c) are depicted with respect to the reference elemental image, so that they show the x, y, and z information of the object voxel corresponding to each pixel in the reference elemental image, respectively. Considering the real depths of the two dice, 100 mm for the die with 5 facing spots and 130 mm for the die with 6 facing spots, we can see that Fig. 9(c) correctly reveals the depths of the two dice.
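One simple way to impose a disparity gradient constraint of this kind is to clamp the per-pixel disparity step. The sketch below is a toy one-pass, left-to-right version for illustration, not the regulation scheme of Ref. [30].

```python
import numpy as np

def limit_disparity_gradient(disp, max_grad=1.0):
    # Clamp the horizontal disparity gradient to at most max_grad per pixel
    out = disp.astype(float).copy()
    for i in range(out.shape[0]):
        for j in range(1, out.shape[1]):
            step = out[i, j] - out[i, j - 1]
            if abs(step) > max_grad:
                out[i, j] = out[i, j - 1] + np.sign(step) * max_grad
    return out

# An isolated jump of 5 is smoothed into unit steps
disp = np.array([[0.0, 5.0, 5.0, 5.0]])
print(limit_disparity_gradient(disp))  # [[0. 1. 2. 3.]]
```

A practical regulator would treat both directions and both scan orders symmetrically so that genuine depth edges are not biased toward one side.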

Fig. 8. Detected disparity map: (a) initial disparity map; (b) regulated disparity map.

Fig. 9. 3D information obtained: (a) x position; (b) y position; (c) z position.

Fig. 10. Examples of the generated elemental images.

We modified the 3D information on the object and generated elemental images with the modified 3D information and the given specifications of the lens array in the display system. The transverse size, thickness, and depth were modified in various manners and the corresponding elemental images were generated. In the generation of the elemental images, the extracted 3D information was interpolated to prevent under-sampling of the elemental images.

Figure 10 shows some examples of the generated elemental images. Some of the noise in the generated elemental images originates from errors in disparity detection. Note that a conventional integral imaging system cannot cope with any change in the specifications of the lens array, and the 3D information on the object is therefore fixed. The results shown in Fig. 10 demonstrate the excellent flexibility of the proposed method with respect to system parameters and display conditions.

Fig. 11. Integrated image with depth inversion: (a) integrated image; (b) diffused image by a diffuser at 9.5 cm; (c) diffused image by a diffuser at 13 cm.

Fig. 12. Integrated image with transverse magnification: (a) diffused image by a diffuser at 9.5 cm; (b) diffused image by a diffuser at 13 cm.

Using the generated elemental images, we integrated the 3D images; some examples are shown in Figs. 11–13. Figure 11(a) shows the integrated image using lens array 2, and Figs. 11(b) and 11(c) show its diffused images obtained by locating the diffuser at the corresponding positions. The diffuser was used to precisely determine the depth of the integrated image: we moved the diffuser longitudinally to find the location where the diffused integrated image has the maximum resolution. In generating the elemental images used in Fig. 11(a), we inverted the depth of the object with respect to the central depth plane, which was calculated by averaging the detected depths of the two dice. Consequently, the die with 5 facing spots is integrated farther away than the die with 6 facing spots, although the die with 5 spots was originally located nearer to the lens array in the pickup. Figure 12 shows the diffused image of the 3D image integrated by lens array 1 with a transverse magnification by a factor of one and a half. In this case, the two dice images are enlarged compared to the original size shown in Fig. 11, while the depths of the two integrated images are not inverted. Figure 13 shows the integrated image of Fig. 12 observed at different viewing angles without a diffuser. Since the integrated image of the die with 6 facing spots (left die) is located farther from the lens array (closer to the observer) than the die with 5 spots (right die), their relative positions change with the viewing direction, clearly demonstrating the 3D nature of the integrated image.

Fig. 13. Integrated image observed from different directions.

4. Conclusion

In this paper, an integral imaging system involving the extraction of 3D information is proposed. The proposed method extracts 3D information on the object from the elemental images picked up by an integral imaging pickup system, modifies it in various ways, and generates elemental images for an integral imaging display system with different lens array specifications. Since the extracted 3D information of the object, rather than the original elemental images, is transmitted and processed for a specific display system, the proposed method provides considerable flexibility with respect to the system parameters of the integral imaging pickup and display systems, as well as controllability of the 3D profile of the object. The experimental results confirm the feasibility of the proposed method.

Acknowledgments

This work was supported by the Information Display R&D Center, one of the 21st Century Frontier R&D Programs funded by the Ministry of Science and Technology of Korea.

References and links

1. G. Lippmann, “La photographie integrale,” C. R. Acad. Sci. 146, 446–451 (1908).

2. S. Manolache, A. Aggoun, M. McCormick, N. Davies, and S. Y. Kung, “Analytical model of a three-dimensional integral image recording system that uses circular and hexagonal-based spherical surface microlenses,” J. Opt. Soc. Am. A. 18, 1814–1821 (2001). [CrossRef]  

3. B. Lee, S. Jung, and J. -H. Park, “Viewing-angle-enhanced integral imaging using lens switching,” Opt. Lett. 27, 818–820 (2002). [CrossRef]  

4. T. Naemura, T. Yoshida, and H. Harashima, “3-D computer graphics based on integral photography,” Opt. Express 8, 255–262 (2001). [CrossRef]   [PubMed]  

5. L. Erdmann and K. J. Gabriel, “High-resolution digital integral photography by use of a scanning microlens array,” Appl. Opt. 40, 5592–5599 (2001). [CrossRef]  

6. H. Choi, S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Multiple-viewing-zone integral imaging using a dynamic barrier array for three-dimensional displays,” Opt. Express 11, 927–932 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-8-927. [CrossRef]   [PubMed]  

7. S. Jung, J.-H. Park, H. Choi, and B. Lee, “Viewing-angle-enhanced integral three-dimensional imaging along all directions without mechanical movement,” Opt. Express 11, 1346–1356 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-12-1346. [CrossRef]   [PubMed]  

8. J.-H. Park, S. Jung, H. Choi, and B. Lee, “Integral imaging with multiple image planes using a uniaxial crystal plate,” Opt. Express 11, 1862–1875 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-16-1862. [CrossRef]   [PubMed]  

9. S.-H. Shin and B. Javidi, “Speckle reduced three-dimensional volume holographic display using integral imaging,” Appl. Opt. 41, 2644–2649 (2002). [CrossRef]   [PubMed]  


References


  1. G. Lippmann, “La photographie integrale,” C. R. Acad. Sci. 146, 446–451 (1908).
  2. S. Manolache, A. Aggoun, M. McCormick, N. Davies, and S. Y. Kung, “Analytical model of a three-dimensional integral image recording system that uses circular and hexagonal-based spherical surface microlenses,” J. Opt. Soc. Am. A 18, 1814–1821 (2001).
  3. B. Lee, S. Jung, and J.-H. Park, “Viewing-angle-enhanced integral imaging using lens switching,” Opt. Lett. 27, 818–820 (2002).
  4. T. Naemura, T. Yoshida, and H. Harashima, “3-D computer graphics based on integral photography,” Opt. Express 8, 255–262 (2001).
  5. L. Erdmann and K. J. Gabriel, “High-resolution digital integral photography by use of a scanning microlens array,” Appl. Opt. 40, 5592–5599 (2001).
  6. H. Choi, S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Multiple-viewing-zone integral imaging using a dynamic barrier array for three-dimensional displays,” Opt. Express 11, 927–932 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-8-927.
  7. S. Jung, J.-H. Park, H. Choi, and B. Lee, “Viewing-angle-enhanced integral three-dimensional imaging along all directions without mechanical movement,” Opt. Express 11, 1346–1356 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-12-1346.
  8. J.-H. Park, S. Jung, H. Choi, and B. Lee, “Integral imaging with multiple image planes using a uniaxial crystal plate,” Opt. Express 11, 1862–1875 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-16-1862.
  9. S.-H. Shin and B. Javidi, “Speckle reduced three-dimensional volume holographic display using integral imaging,” Appl. Opt. 41, 2644–2649 (2002).
  10. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001).
  11. Y. Frauel and B. Javidi, “Digital three-dimensional image correlation by use of computer-reconstructed integral imaging,” Appl. Opt. 41, 5488–5496 (2002).
  12. S. Kishk and B. Javidi, “Improved resolution 3D object sensing and recognition using time multiplexed computational integral imaging,” Opt. Express 11, 3528–3541 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-26-3528.
  13. S.-H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12, 483–491 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-3-483.
  14. R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, “Enhanced depth of field integral imaging with sensor resolution constraints,” Opt. Express 12, 5234–5241 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-21-5237.
  15. T. Okoshi, “Optimum design and depth resolution of lens-sheet and projection-type three-dimensional displays,” Appl. Opt. 10, 2284–2291 (1971).
  16. B. Lee, J.-H. Park, and H. Choi, “Scaling of three-dimensional integral imaging,” in Optical Information Systems, B. Javidi and D. Psaltis, eds., Proc. SPIE 5202, 60–67 (2003).
  17. J.-H. Park, S.-W. Min, S. Jung, and B. Lee, “A new stereovision scheme using a camera and a lens array,” in Algorithms and Systems for Optical Information Processing V, B. Javidi and D. Psaltis, eds., Proc. SPIE 4471, 73–80 (2001).
  18. C. Wu, A. Aggoun, M. McCormick, and S. Y. Kung, “Depth extraction from unidirectional image using a modified multi-baseline technique,” in Stereoscopic Displays and Virtual Reality Systems IX, A. J. Woods, J. O. Merritt, S. A. Benton, and M. T. Bolas, eds., Proc. SPIE 4660, 135–145 (2002).
  19. J.-H. Park, S. Jung, H. Choi, Y. Kim, and B. Lee, “Depth extraction by use of a rectangular lens array and one-dimensional elemental image modification,” Appl. Opt. 43, 4882–4895 (2004).
  20. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36, 1598–1603 (1997).
  21. S. T. Barnard and M. A. Fischler, “Stereo vision,” in Encyclopedia of Artificial Intelligence (John Wiley, New York, 1987), pp. 1083–1090.
  22. M. Yachida, Y. Kitamura, and M. Kimachi, “Trinocular vision: new approach for correspondence problem,” in Proc. ICPR, 1041–1044 (1986).
  23. M. Okutomi and T. Kanade, “A multiple-baseline stereo,” IEEE Trans. Patt. Anal. Machine Intell. 15, 353–363 (1993).
  24. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37, 2034–2045 (1998).
  25. S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Three-dimensional display system based on computer-generated integral photography,” in Stereoscopic Displays and Virtual Reality Systems VIII, A. J. Woods, M. T. Bolas, J. O. Merritt, and S. A. Benton, eds., Proc. SPIE 4297, 187–195 (2001).
  26. R. R. Schultz and R. L. Stevenson, “Extraction of high-resolution frames from video sequences,” IEEE Trans. Image Processing 5, 996–1011 (1996).
  27. R. C. Hardie, K. J. Barnard, and E. E. Armstrong, “Joint MAP registration and high-resolution image estimation using a sequence of undersampled images,” IEEE Trans. Image Processing 6, 1621–1633 (1997).
  28. J.-H. Park, Y. Kim, and B. Lee, “Elemental image generation based on integral imaging with enhanced resolution,” in Information Optics and Photonics Technology, G. Mu, F. T. S. Yu, and S. Jutamulia, eds., Proc. SPIE 5642, paper 5642-24 (2004).
  29. A. Stern and B. Javidi, “Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42, 7036–7042 (2003).
  30. S. B. Pollard, J. E. W. Mayhew, and J. P. Frisby, “PMF: a stereo correspondence algorithm using a disparity gradient limit,” Perception 14, 449–470 (1985).




Figures (13)

Fig. 1. Concept of integral imaging and lens array mismatch: (a) when the same lens arrays are used in the pickup and display; (b) when different lens arrays are used in the pickup and display.
Fig. 2. Concept of the proposed method.
Fig. 3. Geometry of the integral imaging pickup part.
Fig. 4. Disparity of the elemental images.
Fig. 5. Concept of the pseudoscopic image problem.
Fig. 6. Conventional method for overcoming the pseudoscopic image problem: (a) simplified pickup system; (b) display as a virtual image.
Fig. 7. The portion of the elemental images captured by the CCD.
Fig. 8. Detected disparity map: (a) initial disparity map; (b) regulated disparity map.
Fig. 9. 3D information obtained: (a) x position; (b) y position; (c) z position.
Fig. 10. Examples of the generated elemental images.
Fig. 11. Integrated image with depth inversion: (a) integrated image; (b) image formed on a diffuser at 9.5 cm; (c) image formed on a diffuser at 13 cm.
Fig. 12. Integrated image with transverse magnification: (a) image formed on a diffuser at 9.5 cm; (b) image formed on a diffuser at 13 cm.
Fig. 13. Integrated image observed from different directions.

Tables (1)

Table 1. Specifications of the experimental setup

Equations (10)

$$y_{c,q} = \frac{f_a f_c (q\varphi - y)}{l z} - \frac{f_c q\varphi}{l}, \tag{1}$$

$$d_{q_1,q_2} = y_{c,q_1} - y_{c,q_2} = (q_2 - q_1)\,\frac{f_a f_c \varphi}{l}\left(\frac{1}{z} + \frac{1}{f_a}\right) = (q_2 - q_1)\,d, \tag{2}$$

$$\rho \approx 2\tan^{-1}\!\left[\frac{\varphi l}{2(l + f_a) f_a}\right]. \tag{3}$$

$$y_{f,q} = \frac{g_p (q\varphi - y)}{z_c + b} + q\varphi, \tag{4}$$

$$y'_{f,q} = \frac{g_p (q\varphi - y)}{z'_c + b'} + q\varphi. \tag{5}$$

$$y_{i,q} = q\varphi\left[1 - \frac{g_p (z'_c + b')}{g_d (z_c + b)}\right] + y\,\frac{g_p (z'_c + b')}{g_d (z_c + b)}, \tag{6}$$

$$\frac{1}{z_c}\,\frac{z_c + f_a}{z_c + b} - \frac{1}{z'_c}\,\frac{z'_c - f_a}{z'_c + b'} = 0, \tag{7}$$

$$z'_c = z_c - 2 f_a, \tag{8}$$

$$b' = \frac{z_c - 2 f_a}{z_c}\, b, \tag{9}$$

$$y_{i,q} = y. \tag{10}$$
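Equation (2) ties the disparity between neighboring elemental images to the object depth z, which is what allows the pickup geometry to be inverted for depth extraction. The sketch below is illustrative only (not the authors' implementation): it assumes the unit-disparity relation d = (f_a f_c φ / l)(1/z + 1/f_a), with f_a the elemental-lens focal length, f_c the camera-lens focal length, φ the lens pitch, and l the array-to-camera distance; the function names and the numerical parameter values are hypothetical.

```python
def disparity_from_depth(z, f_a, f_c, phi, l):
    """Forward model of Eq. (2): unit disparity d for a point at depth z."""
    return (f_a * f_c * phi / l) * (1.0 / z + 1.0 / f_a)

def depth_from_disparity(d, f_a, f_c, phi, l):
    """Invert Eq. (2): recover depth z from a measured unit disparity d.

    Solves d = (f_a * f_c * phi / l) * (1/z + 1/f_a) for z.
    """
    inv_z = d * l / (f_a * f_c * phi) - 1.0 / f_a
    return 1.0 / inv_z

# Round-trip check with arbitrary (non-experimental) parameters, in mm:
d = disparity_from_depth(z=100.0, f_a=3.0, f_c=50.0, phi=1.0, l=200.0)
z = depth_from_disparity(d, f_a=3.0, f_c=50.0, phi=1.0, l=200.0)
```

Because all elemental lenses share the same pitch and focal length, a disparity map estimated between elemental images converts point by point to the depth map used in the elemental-image regeneration step.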
