Hologram synthesis of three-dimensional real objects using portable integral imaging camera

Open Access

Abstract

We propose a portable hologram capture system based on integral imaging. An integral imaging camera with an integrated micro lens array captures the spatio-angular light ray distribution of a three-dimensional scene under incoherent illumination. The captured light ray distribution is then processed to synthesize the corresponding hologram. Experimental results show that the synthesized hologram is optically reconstructed successfully, demonstrating accommodation and motion parallax of the reconstructed three-dimensional scene.

© 2013 Optical Society of America

1. Introduction

A hologram contains the three-dimensional (3D) information of an object scene. Optical reconstruction of the hologram presents natural 3D imagery with all human depth cues, making holography an attractive 3D display technique. Optical capture of the hologram, however, is not yet sufficiently practical. The traditional hologram capture technique is based on a coherent interferometric optical system, which requires a well-controlled laboratory environment free from external light and vibration. Laser illumination on the object also limits the maximum size and distance of the object that can be captured. Incoherent techniques that capture the interference pattern without a laser have been proposed, but they still require precise alignment of optical components including a spatial light modulator (SLM) [1].

View-based incoherent hologram capture techniques alleviate these limitations [2–4]. Instead of the interference pattern, multiple perspective images or the spatio-angular light ray distribution of the 3D scene are captured under regular incoherent illumination. The captured information is then processed to synthesize the hologram. Since coherent illumination and interferometric optics are not required, the system is robust against external vibrations and misalignment, and outdoor capture is also possible. However, the system configuration is not very compact. An array of cameras is usually used to capture the multiple views, which makes the overall system bulky. A more compact system consisting of a single camera and an external lens array has been reported to capture the spatio-angular light ray distribution and synthesize the corresponding hologram [5–7]. However, this system still requires alignment between the external lens array and the camera, and it is not portable. A portable integral imaging camera, also called a plenoptic camera or a light field camera, which places a micro lens array inside the camera body, was developed to perform numerical refocusing after scene capture [8], but hologram synthesis of the 3D scene and its optical reconstruction have not been reported.

In this paper, we propose a hologram synthesis method for a real-existing 3D scene using a portable integral imaging camera. A micro lens array is integrated inside a usual digital single-lens reflex (DSLR) camera in front of its image sensor, making the system highly compact and portable. The proposed system captures the spatio-angular light ray distribution of the 3D object scene, extracts various views, and finally synthesizes the hologram. In the following sections, we explain the system configuration and the hologram synthesis algorithm along with experimental verification.

2. System configuration

Figure 1 shows a schematic configuration of the proposed camera system. A micro lens array consisting of identical elemental lenses is placed in front of the image sensor, such as a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor. The distance between the micro lens array and the image sensor is set to the focal length of the lens array. The main lens of the camera forms an intermediate image of the 3D scene around the micro lens array. The light from the intermediate image is then captured by the micro lens array to form an image array, also called a set of elemental images, on the image sensor. In order to prevent overlap between neighboring elemental images on the image sensor, the image-side f-number (f/#) of the main lens is matched to that of the micro lens array, i.e. φm/lm = φa/fa, where φm and φa are the aperture sizes of the main lens and the elemental lens, respectively, lm is the distance between the main lens and the micro lens array, and fa is the focal length of the lens array.
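As a minimal numeric sketch of this matching condition: the micro lens values below are those reported later in Section 4, while the main-lens distance lm is an illustrative assumption and not a value given in the paper.

```python
# f-number matching sketch: phi_m / l_m = phi_a / f_a
f_a = 2.4e-3    # focal length of an elemental lens [m] (Section 4 value)
phi_a = 125e-6  # aperture (pitch) of an elemental lens [m] (Section 4 value)

print(f_a / phi_a)          # f/# of the micro lens array: 19.2

l_m = 50e-3     # assumed main-lens-to-array distance [m]; illustrative only
phi_m = l_m * phi_a / f_a   # main-lens aperture satisfying the matching condition
print(phi_m * 1e3)          # ~2.6 mm for the assumed l_m
```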

Fig. 1 Optical configuration of the proposed camera.

3. Hologram synthesis

The proposed method synthesizes the hologram of the captured 3D scene using two-step processing: sub-image array synthesis, followed by its Fourier transform with random phase assignment. Figure 2 shows the sub-image array synthesis step. In the optical configuration of the proposed system, each pixel on the image sensor captures a light ray bundle of a specific angle, determined by the local position of the pixel with respect to the optic axis of the corresponding elemental lens. By collecting the pixels at the same local positions, as shown in Fig. 2, an array of sub-images of the 3D scene is synthesized [9]. Since a single pixel is extracted from each elemental image, the pixel count of each sub-image is given by the number of elemental lenses in the array, and the number of sub-images is given by the pixel count of the image under each lens. After the generation, the order of the sub-images in the array is reversed; equivalently, each elemental image is rotated by 180° before the sub-image array synthesis. This inverts the depth of the 3D images, eliminating the pseudoscopic image problem in the final optical reconstruction [10].
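The rearrangement from elemental images to sub-images is a pure pixel reshuffle. The following is a minimal NumPy sketch of that mapping, assuming the set of elemental images has already been cropped to an exact grid of p × p pixels per lens; it illustrates the idea rather than reproducing the authors' implementation.

```python
import numpy as np

def synthesize_sub_images(sensor_image, num_lens_y, num_lens_x, p):
    """Rearrange a set of elemental images into a sub-image array.

    sensor_image : 2-D array of shape (num_lens_y * p, num_lens_x * p),
                   i.e. p x p pixels recorded under each elemental lens.
    Returns an array of shape (p, p, num_lens_y, num_lens_x):
    sub_images[i, j] collects pixel (i, j) of every elemental image,
    forming one orthographic view of the scene.
    """
    # split into elemental images: index order (lens_y, lens_x, pix_y, pix_x)
    ei = sensor_image.reshape(num_lens_y, p, num_lens_x, p).transpose(0, 2, 1, 3)
    # rotate each elemental image by 180 deg to avoid the pseudoscopic image
    ei = ei[:, :, ::-1, ::-1]
    # collect identical local pixel positions across all lenses -> sub-images
    return ei.transpose(2, 3, 0, 1)
```

With the single-capture values of the experiment (94 × 67 elemental lenses, 25 × 25 pixels per lens), this produces 25 × 25 sub-images of 94 × 67 pixels each.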

Fig. 2 Sub-image array synthesis.

Note that each sub-image generated in the proposed system represents a parallel ray bundle in the image space of the 3D scene. Therefore, the sub-image contains an orthographic view of the intermediate image, which is already de-magnified axially and laterally by the imaging of the main lens [11]. The final hologram synthesized from the sub-images therefore also reconstructs the intermediate image of the 3D scene.

The second step is the hologram synthesis from the sub-images. In the proposed method, a Fourier-holographic stereogram is synthesized. Figure 3 illustrates the hologram synthesis process. Each sub-image is first multiplied by a random phase mask and then Fourier-transformed to form a hologram patch. These hologram patches are stitched together to form the final hologram. When Nx × Ny sub-images of Mx × My pixel count each are prepared in the first step, each Fourier-transformed hologram patch has Mx × My pixel count and the final hologram, after stitching, has a (Nx × Mx) × (Ny × My) pixel count.

Fig. 3 Fourier hologram synthesis.

The random phase mask has the same pixel count as the sub-image, and each of its pixels has a random phase value in the 2π range. Note that, without the random phase mask, the Fourier transform of the sub-image would have a strong peak around the center of the corresponding hologram patch. The random phase mask multiplied with the sub-image removes this strong peak and distributes the Fourier transform evenly, making more efficient use of the hologram patch area [12].
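Putting the operations of this second step together, a minimal NumPy sketch of the patch-wise synthesis could look as follows; the FFT centering convention and any amplitude normalization are implementation choices not specified in the paper.

```python
import numpy as np

def synthesize_fourier_hologram(sub_images, rng=None):
    """Fourier-holographic stereogram from a sub-image array.

    sub_images : real-valued array of shape (Ny, Nx, My, Mx), the sub-images
                 selected at a uniform interval.
    Returns a complex hologram of shape (Ny * My, Nx * Mx).
    """
    rng = np.random.default_rng() if rng is None else rng
    Ny, Nx, My, Mx = sub_images.shape
    hologram = np.zeros((Ny * My, Nx * Mx), dtype=np.complex128)
    for iy in range(Ny):
        for ix in range(Nx):
            # random phase mask spanning the 2*pi range, same size as the sub-image
            phase = np.exp(1j * 2 * np.pi * rng.random((My, Mx)))
            field = sub_images[iy, ix] * phase
            # Fourier transform -> one hologram patch, stitched in place
            patch = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
            hologram[iy * My:(iy + 1) * My, ix * Mx:(ix + 1) * Mx] = patch
    return hologram
```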

4. Experiment

4.1 Experimental setup

In the experiment, a micro lens array of 95 × 95 elemental lenses was attached on the image sensor of a mirror-less DSLR camera. The pitches of the elemental lens and the camera pixel are 125 µm and 5.1 µm, respectively, so each elemental lens forms an elemental image of approximately 25 × 25 pixel count. The gap between the micro lens array and the image sensor was adjusted to the focal length of the lens array, i.e. 2.4 mm. The specifications of the camera and the micro lens array used in the experiment are listed in Table 1. Figure 4 shows a picture of the implemented camera with the micro lens array.

Table 1. Experimental Setup

Fig. 4 Implemented camera with the micro lens array.

4.2 Synthetic aperture technique

In our experimental setup, the f/# of the micro lens array was too large: 2.4/0.125 = 19.2, which is outside the adjustable range of the camera main lens. In order to match the f/# of the camera main lens to that of the micro lens array, an additional aperture was placed in front of the main lens. In the experiment, the optimum aperture diameter was found to be 2.5 mm by observing the gap between neighboring elemental images for various aperture diameters. Another problem of the large f/# of the micro lens array is that the angular field of view (FOV) of each elemental lens is not sufficient to show depth-related effects such as motion parallax. In order to reduce the effective f/# of the micro lens array, we used a synthetic aperture technique in our experiment.

Figure 5 shows the concept of the synthetic aperture technique. For a fixed object scene, multiple sets of elemental images are captured at different aperture positions and then combined to yield a single set of elemental images with a larger FOV, or equivalently a reduced effective f/#. In order to obtain a seamless combination, the shift step of the aperture was set equal to the aperture diameter of 2.5 mm in the experiment.
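A minimal sketch of this combination is given below. It assumes the captured elemental images are already split into per-lens arrays and that each aperture position maps directly onto one angular sub-block of the synthetic elemental image; the actual block ordering may require flipping depending on the optical layout.

```python
import numpy as np

def combine_synthetic_aperture(captures, p):
    """Combine S x S captures into one synthetic set of elemental images.

    captures : array of shape (S, S, num_lens_y, num_lens_x, p, p),
               elemental images recorded at S x S aperture positions.
    Returns elemental images of shape (num_lens_y, num_lens_x, S*p, S*p),
    i.e. each lens carries an (S*p) x (S*p) elemental image with an
    S-times larger angular field of view.
    """
    S, _, ly, lx, _, _ = captures.shape
    synthetic = np.zeros((ly, lx, S * p, S * p), dtype=captures.dtype)
    for ay in range(S):
        for ax in range(S):
            # each aperture position fills one p x p sub-block of every
            # elemental image (direct mapping assumed here)
            synthetic[:, :, ay * p:(ay + 1) * p, ax * p:(ax + 1) * p] = captures[ay, ax]
    return synthetic
```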

Fig. 5 Concept of synthetic aperture technique.

4.3 Experimental result

Figure 6 shows the experimental setup. In front of the micro-lens-array-implemented camera, an aperture of 2.5 mm diameter was located to match the f/#. For the 3D object scene, two objects, ‘bear’ and ‘INHA’, were placed at 33 cm and 146 cm from the camera. The lateral sizes are 4 cm × 3.5 cm for the ‘bear’ object and 26.5 cm × 19 cm for the ‘INHA’ object. For the synthetic aperture technique, 5 × 5 sets of elemental images were captured with appropriate lateral shifts of the aperture.

Fig. 6 Experimental setup. (a) Integral imaging camera with the aperture, (b) 3D scene.

Figure 7(a) shows one example of an image captured with a single exposure and a fixed aperture position. In the magnified portion of Fig. 7(a), each circular image (solid yellow box) corresponds to one elemental lens in the array and has 25 × 25 pixel count. Out of the total 95 × 95 elemental images, only the central 94 × 67 elemental images were cropped and used in the subsequent processing to remove the black background region. Figure 7(b) shows the synthetic set of elemental images generated from the 5 × 5 captured images. In the synthetic image shown in Fig. 7(b), the collection of 5 × 5 circular images (dotted yellow box) corresponds to each elemental lens, providing a 5 times larger angular FOV for each lens.

Fig. 7 Captured images. (a) Single capture (94 × 67 lens images of 25 × 25 pixel count), (b) Synthetic aperture with 5 × 5 captures (94 × 67 lens images of 125 × 125 pixel count).

Figure 8(a) shows the sub-images generated from the synthetic image shown in Fig. 7(b). 125 × 125 sub-images were synthesized, each with 94 × 67 pixel count. In our experiment, we did not use all 125 × 125 sub-images. Only 20 × 16 sub-images distributed regularly across the full 125 × 125 array were used in the hologram synthesis, giving a (20 × 94) × (16 × 67) = 1880 × 1072 pixel count for the synthesized hologram. Figure 8(b) shows the selected 20 × 16 sub-images. The brightness fluctuation observed in the selected sub-image array of Fig. 8(b) originates from the cosine-fourth-power brightness falloff, also called natural vignetting, of the individual lenses in the lens array.
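Using the array convention of the earlier sketches, the regular selection and the resulting hologram size can be sketched as follows; the stride of 5 and the 20 × 16 counts follow the experiment, while the starting offset of the selection is an assumption.

```python
import numpy as np

# Hypothetical sub-image array for illustration: (view_y, view_x, rows, cols).
# In practice it comes from the sub-image synthesis step applied to Fig. 7(b).
sub_images = np.zeros((125, 125, 67, 94))

stride = 5                                   # uniform interval between selected views
selected = sub_images[:16 * stride:stride,   # 16 views vertically
                      :20 * stride:stride]   # 20 views horizontally
print(selected.shape)                        # (16, 20, 67, 94)

# hologram pixel count after stitching one 67 x 94 patch per selected view
print(20 * 94, 16 * 67)                      # 1880 1072
```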

Fig. 8 Sub-images synthesized using Fig. 7(b). (a) All 125 × 125 sub-images of 94 × 67 pixel count; among these, only the 20 × 16 sub-images (yellow boxes) selected regularly across the whole array were used in the hologram synthesis, (b) Selected 20 × 16 sub-images.

Note that the selection of the sub-images needs to be made with a uniform interval (an interval of 5 sub-images along the horizontal and vertical directions in our experiment, as indicated by the yellow boxes in Fig. 8(a)) to ensure a constant parallax change in the reconstructed 3D image. Also note that the selection of the sub-images is not unique in the proposed method; any selection can be used for the hologram synthesis as long as the interval between the selected sub-images is uniform. Figure 9 shows the phase distribution of the final hologram synthesized from the selected 20 × 16 sub-images.

Fig. 9 Phase distribution of the synthesized hologram (1880 × 1072 pixel count).

The synthesized hologram was verified by optical reconstruction. In the optical reconstruction, a laser of 532 nm wavelength and a phase-only SLM of 1920 × 1080 pixel count were used with a Fourier transform lens. Since the SLM modulates only the phase of the light, only the phase part of the synthesized hologram was used in the reconstruction. Figure 10 shows the optical reconstruction result. The lateral size of the optically reconstructed image was about 2 cm × 2 cm.
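The reconstruction itself was performed optically with the SLM and the Fourier transform lens. As a purely numerical sanity check of the stereogram structure (not a model of the optical setup), one could invert each patch of the phase-only hologram and confirm that the view amplitudes are recovered:

```python
import numpy as np

def reconstruct_views(hologram, My, Mx):
    """Numerically invert each patch of a phase-only Fourier hologram."""
    phase_only = np.exp(1j * np.angle(hologram))   # keep only the phase, as on the SLM
    Hy, Hx = hologram.shape
    Ny, Nx = Hy // My, Hx // Mx
    views = np.empty((Ny, Nx, My, Mx))
    for iy in range(Ny):
        for ix in range(Nx):
            patch = phase_only[iy * My:(iy + 1) * My, ix * Mx:(ix + 1) * Mx]
            field = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(patch)))
            views[iy, ix] = np.abs(field)          # recovered view amplitude
    return views
```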

Fig. 10 Optical reconstruction. (a) Accommodation (Media 1), (b) Motion parallax (Media 2).

Figure 10(a) shows the optical reconstruction results captured at different focal planes. It can be observed that the ‘bear’ object and the ‘INHA’ object are focused at different axial distances from the camera, exhibiting the 3D nature of the reconstruction. Figure 10(b) shows the motion parallax; the images were captured at 9 different angles. When we captured Fig. 10(b), the depth of focus of the camera was intentionally increased to capture both object images with less blur, which degrades the captured image quality. Nevertheless, the relative shift between the ‘bear’ and ‘INHA’ object images is clearly observed, confirming successful reconstruction of the 3D scene.

Note that the synthetic aperture technique is not an essential part of the proposed method. In our experiment, it was used only to compensate for the overly large f/# (= 19.2) of the implemented micro lens array, not to increase the number of sub-images or the number of pixels in each elemental image. Even though we obtained (5 × 25) × (5 × 25) = 125 × 125 sub-images by the synthetic aperture technique with 5 × 5 captures, only 20 × 16 sub-images were actually used in the hologram synthesis. The successful optical reconstruction shown in Fig. 10 indicates that if the f/# of the micro lens array were around 19.2/5 ≅ 4 or less under our experimental conditions, a single capture without the synthetic aperture technique would be sufficient to produce a similar result. The additional aperture used in our experiment would also be eliminated in that case, leaving only a portable single DSLR camera with an integrated micro lens array. Consequently, the successful optical reconstruction shown in Fig. 10 supports the feasibility of the proposed portable hologram camera system based on integral imaging.

5. Conclusion

In this paper, we proposed a portable integral imaging camera for synthesizing the hologram of a real-existing 3D scene. The four-dimensional spatio-angular light ray distribution of the 3D scene is captured under regular incoherent illumination by a micro lens array implemented on the image sensor plane of a usual mirror-less DSLR camera. The captured light ray distribution is processed to form an array of sub-images and then used to synthesize a Fourier holographic stereogram. Owing to the large f/# of the micro lens array available at the time of the experiment, an additional aperture and the synthetic aperture technique were applied to capture a sufficient angular range. The experimental results show that the hologram of the 3D scene is synthesized and optically reconstructed successfully, exhibiting accommodation and motion parallax of the reconstructed images.

Acknowledgment

This work was partly supported by the IT R&D program of MSIP/MOTIE/KEIT [10039169, Development of Core Technologies for Digital Holographic 3-D Display and Printing System]. This work was also partly supported by an INHA UNIVERSITY Research Grant (INHA-47294).

References and links

1. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32(8), 912–914 (2007). [CrossRef]   [PubMed]  

2. N. T. Shaked, B. Katz, and J. Rosen, “Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods,” Appl. Opt. 48(34), H120–H136 (2009). [CrossRef]   [PubMed]  

3. Y. Sando, M. Itoh, and T. Yatagai, “Holographic three-dimensional display synthesized from three-dimensional Fourier spectra of real existing objects,” Opt. Lett. 28(24), 2518–2520 (2003). [CrossRef]   [PubMed]

4. Y. Rivenson, A. Stern, and J. Rosen, “Compressive multiple view projection incoherent holography,” Opt. Express 19(7), 6109–6118 (2011). [CrossRef]   [PubMed]  

5. J.-H. Park, M.-S. Kim, G. Baasantseren, and N. Kim, “Fresnel and Fourier hologram generation using orthographic projection images,” Opt. Express 17(8), 6320–6334 (2009). [CrossRef]   [PubMed]  

6. T. Mishina, M. Okui, and F. Okano, “Calculation of holograms from elemental images captured by integral photography,” Appl. Opt. 45(17), 4026–4036 (2006). [CrossRef]   [PubMed]  

7. K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express 19(10), 9086–9101 (2011). [CrossRef]   [PubMed]  

8. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Rep. CTSR 2005–02 (Stanford University, 2005).

9. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef]   [PubMed]  

10. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997). [CrossRef]   [PubMed]  

11. H. Navarro, J. C. Barreiro, G. Saavedra, M. Martínez-Corral, and B. Javidi, “High-resolution far-field integral-imaging camera by double snapshot,” Opt. Express 20(2), 890–895 (2012). [CrossRef]   [PubMed]  

12. C. B. Burckhardt, “Use of a random phase mask for the recording of Fourier transform holograms of data masks,” Appl. Opt. 9(3), 695–700 (1970). [CrossRef]   [PubMed]  

Supplementary Material (2)

Media 1: AVI (3942 KB)     
Media 2: AVI (8181 KB)     
