Integral imaging-based large-scale full-color 3-D display of holographic data by using a commercial LCD panel


Abstract

We propose a new type of integral imaging-based large-scale full-color three-dimensional (3-D) display of holographic data, based on the direct ray-optical conversion of holographic data into elemental images (EIs). In the proposed system, a 3-D scene is modeled as a collection of depth-sliced object images (DOIs), and three-color hologram patterns for the scene are generated by interfering each color DOI with a reference beam and summing the results using Fresnel convolution integrals. From these hologram patterns, full-color DOIs are reconstructed and converted into EIs through a ray mapping-based direct pickup process. These EIs are then optically reconstructed into a full-color 3-D scene with perspective on a depth-priority integral imaging (DPII)-based 3-D display system employing a large-scale LCD panel. Experiments with a test video confirm the feasibility of the proposed system for practical applications in large-scale holographic 3-D displays.

© 2016 Optical Society of America

1. Introduction

Thus far, the holographic method has been regarded as one of the most attractive approaches to lifelike three-dimensional (3-D) display because it can create the most authentic illusion of observing a 3-D scene without the use of special glasses [1]. Moreover, the holographic 3-D display has also been considered an alternative to current stereoscopic displays, which suffer from the serious drawbacks of eye fatigue and visual discomfort [2–4]. The holographic method, however, faces several practical problems: the unavailability of a full-color holographic 3-D camera system for the live capture of daylight-illuminated outdoor scenes [5], the computational complexity of real-time generation of electro-holographic data for 3-D scenes [6,7], and the lack of a large-scale high-resolution spatial light modulator (SLM) for displaying holographic data, since the resolution of a hologram pattern is on the order of the light wavelength [8]. Furthermore, the holographic 3-D display system becomes considerably more complex for full-color display because it must employ a time- or spatial-multiplexing scheme for the simultaneous display of three-color hologram patterns [8–11]. These critical issues have prevented the current holographic 3-D display system from being widely adopted in practical 3-D video communication and television broadcasting systems.

As an alternative to the holographic 3-D display, the integral imaging method has also been actively researched because it can provide full-color 3-D images with full parallax and continuous viewing points, much like holography [12–14]. Integral imaging is a passive multi-perspective imaging technique that records multiple two-dimensional (2-D) images of a 3-D scene from different perspectives through the combined use of a lens array and a CCD camera; unlike the holographic method, it can therefore capture even outdoor 3-D scenes under daylight illumination.

Basically, integral imaging is composed of two processes: pickup and reconstruction. In the pickup process, the ray information emanating from a 3-D object is captured with the CCD camera through a pickup lens array and recorded in the form of an elemental image array (EIA), where each elemental image (EI) represents a different perspective of the 3-D object. From this captured EIA, a 3-D object image is reconstructed through the combined use of an LCD display panel and a lens array. The integral imaging-based 3-D display system has several advantages over the holographic method: it can be realized in a much simpler optical configuration with an incoherent light source, and in a much larger display size using a commercial LCD panel, even though it still requires improvements in image resolution, depth-of-focus and viewing angle.

Since research groups have actively studied both integral-imaging and holographic techniques for application to future real 3-D television (3DTV) systems, both holographic and integral imaging-based 3DTV systems are expected to be on the market in the near future, each exploiting its own technical advantages [15–17]. This means that real 3-D contents delivered to customers from a TV broadcasting system should be displayable on either a holographic or an integral imaging-based 3DTV system, whether they are originally generated in the form of holographic data or integral images, since customers may have either type of real 3DTV system at home. Therefore, an effective scheme for conversion between holographic data and integral images is necessary. Thus far, several studies on holographic data-to-integral images (H-to-I) and integral images-to-holographic data (I-to-H) conversion have been carried out.

For the I-to-H conversion, a couple of approaches have been proposed [18,19]. Mishina et al. proposed a method to calculate hologram patterns from reconstructed elemental images (EIs) based on the Fresnel diffraction formula [18]. Park et al. suggested another method to synthesize hologram patterns from multiple orthographic projection-view images [19]. Since 3-D data of a real-world scene can be easily picked up with the integral imaging scheme, I-to-H conversion can also be regarded as an alternative to the holographic camera for capturing outdoor 3-D scenes.

In addition, several H-to-I conversion methods have also been proposed [20–22]. Yöntem et al. presented an integral imaging-based 3-D display system in which EIs are converted from hologram data, treated as a diffraction pattern, by simulating the light propagation process from the hologram plane to the reconstruction, lens array and EIA planes [20,21]. This diffraction method, however, employs SLMs for the reconstruction of the converted EIs, which limits its practical application to the large-scale full-color 3-D display of holographic data.

Kim et al. proposed another approach that converts hologram data into an EIA using a cropping method [22]. The hologram pattern of a 3-D scene is cropped into a number of sub-holograms, and each sub-hologram is reconstructed to obtain a different view of the 3-D scene; the resulting set of views is called a sub-image array (SIA). The EIA is then generated by rearranging the image pixels of this converted SIA based on a simple SIA-to-EIA transformation [23,24]. However, since the processing time for cropping and reconstructing the holographic data depends strongly on the number of cropped sub-holograms, as well as on the number of sliced depth planes of the 3-D scene, the practical applicability of this method is very limited.

To solve those problems, a direct ray-mapping method is proposed in this paper. In the proposed method, an input 3-D scene is modeled as a set of depth-sliced object images (DOIs), and the hologram pattern for that scene is generated by interfering each DOI with a reference beam and summing the results using Fresnel convolution integrals. From this hologram pattern, a virtual 3-D scene is reconstructed by illumination with the conjugated reference beam. A virtual lens array is then located in front of this reconstructed 3-D scene, and EIs are directly picked up by simulating the ray mapping-based light propagation process. Thus, unlike the conventional cropping method, the proposed method does not require the two intermediate processes of hologram cropping and sub-image generation for the H-to-I conversion, which considerably reduces the conversion time. In addition, the proposed method enables full-color 3-D display of the hologram data on a large LCD panel, in contrast to the conventional diffraction method, where three SLMs are needed for full-color display.

Basically, an integral imaging-based 3-D display system can provide object depths ranging from a few centimeters to meters through an appropriate combination of the LCD panel and lens array. Thus far, there have been two kinds of integral imaging-based 3-D display systems, depth-priority integral imaging (DPII) and resolution-priority integral imaging (RPII), depending on the relationship between the focal length of the lens and the gap distance between the lens array and the display panel, as illustrated in the sketch below. The DPII system is obtained simply by setting the gap distance equal to the focal length of the lens; it provides 3-D images with a larger depth-of-field (DOF) spanning both the real and virtual image fields, but with lower resolution than the RPII system. On the other hand, the RPII system results when the gap distance differs from the focal length of the lens; it supplies 3-D images with higher resolution, but a much smaller DOF, than the DPII system.
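To make this relationship concrete, the following minimal sketch (a hypothetical illustration, not part of the original system) applies the thin-lens equation to both regimes: when g = f (DPII), the rays leaving each lens are collimated and the central depth plane recedes to infinity, which is what yields the large DOF; when g differs from f (RPII), a central depth plane forms at a finite distance L with 1/g + 1/L = 1/f.

```python
# Minimal sketch (hypothetical values): locating the central depth plane of an
# integral-imaging display from the gap g and focal length f via the thin-lens
# equation 1/g + 1/L = 1/f.

def central_depth_plane(f_mm: float, g_mm: float) -> float:
    """Return the central-depth-plane distance L in mm; inf when g == f."""
    if g_mm == f_mm:
        return float("inf")                 # DPII: collimated rays, depth-priority
    return 1.0 / (1.0 / f_mm - 1.0 / g_mm)  # RPII: finite central image plane

print(central_depth_plane(8.0, 8.0))   # DPII setup of this paper -> inf (large DOF)
print(central_depth_plane(8.0, 8.5))   # hypothetical RPII gap -> 136 mm (small DOF)
```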

In this paper, a large-scale LCD-based DPII display system is implemented, on which EIs converted from the holographic data are optically reconstructed into a large-scale full-color 3-D image of the input scene. To confirm the feasibility of the proposed system, experiments with a test video are performed, and the results are compared with those of the conventional methods.

2. Proposed method

Figure 1 shows the overall block diagram of the proposed system, which consists of four processes. In the first step, three-color hologram patterns are generated by interfering each color DOI of the input 3-D scene with the reference beam. In the second step, a virtual 3-D scene is reconstructed from the hologram patterns by illuminating them with the conjugated reference beam. In the third step, EIs of this virtual 3-D scene are computationally captured by simulating the direct ray-optical pickup process of the conventional integral imaging system. Finally, these EIs are optically reconstructed into a 3-D scene on the DPII-based 3-D display system employing a large-size LCD panel.

Fig. 1 Block diagram of the proposed system composed of four processes.

2.1 Generation of hologram patterns of a 3-D scene based on Fresnel diffraction

Figure 2 shows the optical configuration for generating the Fresnel off-axis holograms of an input 3-D scene, where the input scene is approximated as a collection of depth-sliced object images (DOIs) at different depths. Here, the depth pitch between two adjacent DOIs should be smaller than the maximum depth discrimination value of the human visual system; for an object image viewed at a distance of 50cm, this value is calculated to be 0.15mm [25]. In this paper, the depth pitch is set to 0.05mm.
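As a concrete illustration of this scene model, the short sketch below slices an input frame into DOIs; it is a hypothetical helper that assumes the scene is supplied as a color image plus a depth map normalized to [0, 1], as in the test video of Section 3.

```python
import numpy as np

def slice_into_dois(color, depth, n_planes):
    """Split a scene (color: H x W x 3, depth: H x W in [0, 1]) into n_planes
    depth-sliced object images (DOIs), one per quantized depth plane."""
    plane_idx = np.clip((depth * n_planes).astype(int), 0, n_planes - 1)
    dois = np.zeros((n_planes,) + color.shape, dtype=color.dtype)
    for z in range(n_planes):
        mask = plane_idx == z        # pixels whose depth falls on the z-th plane
        dois[z][mask] = color[mask]  # the DOI keeps only those pixels
    return dois
```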

Fig. 2 Optical configuration to generate the Fresnel off-axis hologram of a 3-D scene.

Here, the complex amplitude on the hologram plane of the 3-D scene, modeled as a set of DOIs, can be represented by the following diffraction equation:

$$O(x,y)=\sum_{z=1}^{N}\sum_{m=1}^{M}\frac{a_m}{r_m}\exp\left[j\left(kr_m+\varphi_m\right)\right]\tag{1}$$
where N and M denote the total numbers of DOIs and of object points in each DOI, and z and m index the zth depth plane (z = 1, 2, ..., N) and the mth object point in a depth plane (m = 1, 2, ..., M), respectively. In addition, the wavenumber k is defined as k = 2π/λ, where λ is the free-space wavelength of the light.

Now, the oblique distance rm between the mth object point (xm, ym, zm) and the point (x, y, 0) on the hologram plane is given by

$$r_m=\sqrt{(x-x_m)^2+(y-y_m)^2+z_m^2}\tag{2}$$
Under the paraxial approximation of Fresnel diffraction, the complex amplitude of the 3-D scene on the hologram plane, representing the diffraction data of the input 3-D scene, can be calculated with the Fresnel diffraction formula as follows:
$$O(x,y)=\sum_{z=1}^{N}\frac{e^{jkz}}{j\lambda z}\sum_{\xi,\eta}u(\xi,\eta;z)\exp\left\{\frac{jk}{2z}\left[(x-\xi)^2+(y-\eta)^2\right]\right\}\tag{3}$$
This complex amplitude of Eq. (3) can also be represented in the convolution form of Eq. (4), where the convolution kernel is given by Eq. (5).
$$O(x,y)=\sum_{z=1}^{N}O_z(x,y)=\sum_{z=1}^{N}u(x,y;z)\ast h_z(x,y)=\sum_{z=1}^{N}F^{-1}\left\{F\left[u(x,y;z)\right]F\left[h_z(x,y)\right]\right\}\tag{4}$$
$$h_z(x,y)=\frac{e^{jkz}}{j\lambda z}\exp\left[\frac{jk}{2z}\left(x^2+y^2\right)\right]\tag{5}$$
Here, Oz(x, y) in Eq. (4) is the diffraction data on the hologram plane of the zth DOI u(ξ,η;z), ∗ denotes 2-D convolution, and F and F−1 represent the Fourier-transform and inverse Fourier-transform operations, which can be computed with the fast Fourier transform (FFT) and inverse FFT (IFFT) algorithms, respectively.
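A minimal NumPy sketch of this FFT-based Fresnel convolution is given below; it is an illustrative implementation of Eqs. (4) and (5) under the stated paraxial assumptions, with circular (FFT) boundary handling, and the grid parameters in the example comment are hypothetical.

```python
import numpy as np

def fresnel_kernel(ny, nx, pitch, z, wl):
    """Sampled, centered Fresnel convolution kernel h_z(x, y) of Eq. (5)."""
    k = 2 * np.pi / wl
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    X, Y = np.meshgrid(x, y)
    return np.exp(1j * k * z) / (1j * wl * z) * np.exp(1j * k * (X**2 + Y**2) / (2 * z))

def fresnel_propagate(u, pitch, z, wl):
    """O_z = u * h_z evaluated as F^-1{ F[u] F[h_z] } (Eq. (4))."""
    h = np.fft.ifftshift(fresnel_kernel(*u.shape, pitch, z, wl))      # origin at (0, 0)
    return np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(h)) * pitch**2  # area element

# Hologram-plane object field: sum the propagated DOIs over all depth planes, e.g.
# O = sum(fresnel_propagate(u_z, 124.5e-6, d + (z - 1) * dz, 532e-9)
#         for z, u_z in enumerate(dois, start=1))
```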

In addition, the complex amplitude of a collimated reference beam R(x, y) incident onto the hologram plane at an oblique angle is represented as follows:

$$R(x,y)=a_R\exp\left[j\left(kx\sin\theta_R\right)\right]\tag{6}$$
In Eq. (6), aR and θR represent the real-valued amplitude and the incident angle of the reference beam on the hologram plane, respectively. Thus, the hologram pattern of an input 3-D scene is the summation of the sub-hologram patterns Iz(x, y) of the individual DOIs, where each Iz(x, y) is obtained by calculating the interference pattern between the object beam Oz(x, y) and the reference beam R(x, y) of Eq. (6), as shown in Eq. (7):
$$I(x,y)=\sum_{z=1}^{N}I_z(x,y)=\sum_{z=1}^{N}\left[\left|O_z(x,y)+R(x,y)\right|^2-\left|O_z(x,y)\right|^2-\left|R(x,y)\right|^2\right]=\sum_{z=1}^{N}\left[O_z(x,y)R^*(x,y)+O_z^*(x,y)R(x,y)\right]\tag{7}$$
where Iz(x, y) is the sub-hologram of the zth DOI u(ξ,η;z), and |Oz(x,y)|2 and |R(x,y)|2 denote the self-interference intensities of the object and reference beams, respectively. Since the holographic information of the 3-D scene is contained only in the interference between the object and reference beams, these two self-interference terms are removed from the whole interference pattern, as shown in Eq. (7). In this paper, the resolution of the hologram pattern is set to 3,835 × 2,392 pixels, and three lasers with wavelengths of 660nm (R), 532nm (G) and 472nm (B) are used to calculate the R-, G- and B-color hologram patterns from the corresponding color images of the 3-D scene.
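Because the self-interference terms cancel, the bias-removed sub-hologram of Eq. (7) reduces to Iz = OzR* + Oz*R = 2 Re{OzR*}. The sketch below makes this explicit; it is illustrative only, and the reference-beam amplitude and angle are assumed values.

```python
import numpy as np

def off_axis_subhologram(O_z, pitch, wl, a_R=1.0, theta_R=np.deg2rad(1.0)):
    """Sub-hologram I_z of Eq. (7) with self-interference removed:
    I_z = O_z R* + O_z* R = 2 Re{ O_z R* }."""
    k = 2 * np.pi / wl
    x = (np.arange(O_z.shape[1]) - O_z.shape[1] // 2) * pitch
    R = a_R * np.exp(1j * k * x * np.sin(theta_R))  # collimated oblique reference
    return 2.0 * np.real(O_z * np.conj(R))          # broadcasts R over the rows

# Full hologram: I = sum of off_axis_subhologram(O_z, ...) over the N depth
# planes, computed once per color channel (660 nm, 532 nm, 472 nm).
```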

2.2 Reconstruction of the virtual 3-D scene from three-color hologram patterns

For the direct conversion of holographic data into EIs, the 3-D scene, modeled as a set of DOIs, is digitally reconstructed from the hologram pattern by illuminating it with the conjugate of the reference beam used in the hologram generation process, as shown in Fig. 3.

Fig. 3 Reconstruction of a virtual 3-D scene from three-color hologram patterns.

Here, the Fresnel propagation formula is used to calculate the reconstructed DOIs at each depth in front of the hologram plane. The object wave O′z(x, y), reconstructed by illuminating the hologram pattern with the conjugated reference beam, can be expressed by Eq. (8):

$$O'_z(x,y)=I_z(x,y)R^*(x,y)=\left[O_z(x,y)R^*(x,y)+O_z^*(x,y)R(x,y)\right]R^*(x,y)=O_z(x,y)\,a_R^2\exp\left[-2j\left(kx\sin\theta_R\right)\right]+O_z^*(x,y)\,a_R^2\tag{8}$$
Now, the DOIs of the 3-D scene, u(x, y; z), can be obtained at each depth plane by propagating this reconstructed object wave O′z along the z-axis direction as follows:
$$u(x,y;z)=O'_z(x,y)\ast h_z(x,y)=F^{-1}\left\{F\left[O'_z(x,y)\right]F\left[h_z(x,y)\right]\right\}\tag{9}$$
Since hologram patterns were generated separately for the R-, G- and B-colors, the R-, G- and B-color DOIs ur(x,y;z), ug(x,y;z) and ub(x,y;z) can each be reconstructed from their corresponding color hologram patterns. These reconstructed R-, G- and B-color DOIs are then synthesized at each depth plane to generate the full-color DOIs of the 3-D scene, as shown in Fig. 4.
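A sketch of this reconstruction step is shown below; it reuses the hypothetical fresnel_propagate helper from the sketch in Section 2.1 and assumes the same plane-wave reference parameters as above.

```python
import numpy as np

def reconstruct_doi(I_z, pitch, wl, z_dist, a_R=1.0, theta_R=np.deg2rad(1.0)):
    """Eqs. (8)-(9): illuminate the sub-hologram with the conjugated reference
    beam, then Fresnel-propagate the result over the distance z_dist."""
    k = 2 * np.pi / wl
    x = (np.arange(I_z.shape[1]) - I_z.shape[1] // 2) * pitch
    R_conj = a_R * np.exp(-1j * k * x * np.sin(theta_R))          # R*(x, y)
    O_prime = I_z * R_conj                                        # Eq. (8)
    return np.abs(fresnel_propagate(O_prime, pitch, z_dist, wl))  # DOI intensity

# Full-color DOI at a depth plane: reconstruct each channel from its own
# hologram and stack, e.g.
# doi = np.stack([reconstruct_doi(I_r, p, 660e-9, z),
#                 reconstruct_doi(I_g, p, 532e-9, z),
#                 reconstruct_doi(I_b, p, 472e-9, z)], axis=-1)
```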

Fig. 4 Three-color DOIs reconstructed from their corresponding color holograms and synthesized DOIs on four neighboring depth planes: (a) R-color DOIs, (b) G-color DOIs, (c) B-color DOIs, (d) Synthesized full-color DOIs.

2.3 Generation of integral images from the reconstructed full-color DOIs

EIs can be picked up with a lens array from the set of full-color DOIs reconstructed from the three-color hologram patterns on each depth plane. The geometric relationship between the reconstructed DOIs, the lens array and the EIs is shown in Fig. 5. The distance between the reconstructed DOI on the 1st depth plane and the lens array, and that between the lens array and the EIA plane, are denoted by d and g, respectively. In addition, the interval between two depth planes and the pitch of the lens array are denoted by Δz and p, respectively.

Fig. 5 Optical geometry for capturing EIs from the DOIs reconstructed from three-color hologram patterns.

Figure 5 shows the direct ray-optical pickup of EIs with a lens array in the conventional integral imaging system. As seen in Fig. 5, the reconstructed virtual 3-D scene is composed of a set of DOIs at different depths, and each lens has its own pickup range over the DOIs depending on the distances between the DOI, the lens array and the EIA plane. In the digital pickup process, each elemental lens of the lens array can be regarded as a camera with its own viewing area. According to the imaging geometry, the EI picked up through each elemental lens is the inverted version of the summed corresponding viewing areas of all DOIs. This pickup process therefore consists of extracting the viewing area of each DOI, then rotating (inverting) and scaling the extracted DOI images. By carrying out these steps for all full-color DOIs, the full-color EIs are finally obtained.

First, the viewing areas of the DOIs seen through each elemental lens are extracted. As seen in Fig. 5, for the EIx,y, i.e., the (x, y)th EI component of the EIA, the viewing area of the DOI on the zth depth plane through an elemental lens becomes a square with side length sz, which follows from the triangular relationship between the DOI, the elemental lens and the picked-up EI, as given by Eq. (10):

$$s_z=\frac{d+(z-1)\Delta z}{g}\,p\tag{10}$$
Here, the starting left-top and ending right-bottom points of the square viewing area are given as follows.

$$(s_{sx},s_{sy})=\left(p\left[\left(x-\frac{1}{2}\right)-\frac{d+(z-1)\Delta z}{2g}\right],\;p\left[\left(y-\frac{1}{2}\right)-\frac{d+(z-1)\Delta z}{2g}\right]\right)\tag{11}$$
$$(s_{ex},s_{ey})=\left(p\left[\left(x-\frac{1}{2}\right)+\frac{d+(z-1)\Delta z}{2g}\right],\;p\left[\left(y-\frac{1}{2}\right)+\frac{d+(z-1)\Delta z}{2g}\right]\right)\tag{12}$$

Then, EIx,y is calculated by summing the viewing areas of all DOIs, and inverting and scaling the result to p × p pixels, as given by Eq. (13):

$$EI_{x,y}=Z(\cdot)\,I(\cdot)\sum_{z=1}^{N}u_z\left[s_{sx}\!:\!s_{ex},\,s_{sy}\!:\!s_{ey}\right]\tag{13}$$
where Z(·) and I(·) represent the scaling and inversion operations, respectively. Thus, the EIA is finally obtained by performing these calculations for all EIs, as represented by the following equation.

$$EIA=\sum_{x}\sum_{y}EI_{x,y}\tag{14}$$
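Computationally, Eqs. (10)-(14) amount to cropping, inverting and rescaling DOI patches for every elemental lens. The sketch below is a hypothetical pixel-unit implementation: it assumes square lenses, measures d, Δz and g in DOI-grid pixels, scales each crop to the lens pitch before accumulation (scaling and inversion commute with the sum of Eq. (13)), and uses nearest-neighbour resampling for Z(·).

```python
import numpy as np

def pick_up_eia(dois, d, dz, g, p_px=13):
    """Direct ray-mapping pickup. dois: (N, H, W, 3) full-color DOIs;
    d, dz, g: pickup geometry in DOI-grid pixels; p_px: lens pitch in pixels."""
    N, H, W, _ = dois.shape
    ny, nx = H // p_px, W // p_px                    # lens-array dimensions
    eia = np.zeros((ny * p_px, nx * p_px, 3))
    for z in range(N):
        s = (d + z * dz) / g * p_px                  # viewing-area side s_z, Eq. (10)
        for y in range(ny):
            for x in range(nx):
                cx, cy = (x + 0.5) * p_px, (y + 0.5) * p_px   # lens center
                x0, x1 = int(cx - s / 2), int(cx + s / 2)     # Eqs. (11)-(12)
                y0, y1 = int(cy - s / 2), int(cy + s / 2)
                crop = dois[z, max(y0, 0):y1, max(x0, 0):x1][::-1, ::-1]  # I(.)
                iy = np.linspace(0, crop.shape[0] - 1, p_px).astype(int)  # Z(.)
                ix = np.linspace(0, crop.shape[1] - 1, p_px).astype(int)
                eia[y*p_px:(y+1)*p_px, x*p_px:(x+1)*p_px] += crop[np.ix_(iy, ix)]
    return eia  # Eq. (14): the EIA accumulates every EI_{x,y}
```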

2.4 Large-scale full-color 3-D display of EIs based on the DPII system

The EIA picked up from the full-color DOIs can be optically reconstructed on a depth-priority integral imaging (DPII)-based 3-D display system. Here, the DPII system is constructed through the combined use of a 22.2” LCD monitor (Model: IBM T221) having a resolution of 3840 × 2400 pixels and a pixel pitch of 124.50um, and a 295 × 184 lens array in which the focal length and pitch of each elemental lens are 8mm and 1.62mm, respectively. Figure 6 shows the optical configuration of the DPII-based 3-D display system, where the display lens array is located at a distance of 8mm from the LCD monitor, equal to the 8mm focal length of the pickup lens.

Fig. 6 Optical configuration of the DPII-based 3-D display system.

With this DPII system, the EIs converted from the holographic data are reconstructed into a 3-D scene image with perspective. The viewing angle of this DPII display system is calculated to be about 11° from the optical parameters of the employed lens array [16]; it can be increased by enlarging the lens pitch or decreasing the focal length of the lens. In addition, the depth of focus (DOF) of the DPII display system is calculated to be 10.40cm ( = 13 × 8mm), since the lens pitch is 1.62mm and each lens covers 13 × 13 (1.62mm/124.50um = 13) pixels [26]. The DOF can be increased by enlarging the lens pitch or increasing the focal length of the lens. Thus, there is a tradeoff between the viewing angle and the DOF in terms of the focal length of the lens, as the short calculation below illustrates.
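Both figures follow directly from the parameters of the employed lens array; the sketch below reproduces them using a common integral-imaging viewing-angle estimate, 2·arctan(p/2g), and the pixels-per-lens DOF estimate cited above (the exact formulas of [16,26] are assumed to match these).

```python
import numpy as np

p = 1.62e-3        # lens pitch [m]
f = 8e-3           # focal length [m] (= gap g in the DPII configuration)
px = 124.50e-6     # LCD pixel pitch [m]

viewing_angle = np.rad2deg(2 * np.arctan(p / (2 * f)))  # ~11.6 deg, i.e. about 11
pixels_per_lens = round(p / px)                         # 1.62 mm / 124.50 um = 13
dof = pixels_per_lens * f                               # 13 x 8 mm = 0.104 m
print(viewing_angle, pixels_per_lens, dof)
```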

3. Experiments and results

As the test video, a full-color 3-D video (30 frames) is generated with 3ds Max. As shown in Fig. 7, the test scene is composed of a moving 3-D airplane and stationary clouds in the sky; that is, the 3-D airplane flies above the clouds. The clouds, all at nearly the same depth, act as the background. The airplane flies from the right-bottom toward the left-top direction, with obvious differences in depth and motion between frames.

Fig. 7 Intensity and depth images of the test video for the 1st, 10th, 20th and 30th frames: (a) Color intensity images, (b) Depth images, (c) Depth distribution of the airplane and cloud image points of the 1st, 10th, 20th and 30th frames.

Table 1 shows the system parameters used for full-color hologram generation and the DPII-based 3-D display. Since the DPII-based 3-D display system employs the 22.2” IBM LCD monitor, the resolution of the hologram pattern is set to 3,835 × 2,392 pixels. Here, the pixel pitch of the LCD panel and the pitch of the lens array are 124.50um and 1.62mm, respectively, so each lens covers 13 × 13 (1.62mm/124.50um = 13) pixels of the LCD panel. Thus, for maximal use of the IBM LCD panel, the lens array is designed with 295 × 184 lenses, which sets the resolution of the hologram and the EIA to 3,835 (295 × 13) × 2,392 (184 × 13) pixels.

Table 1. System parameters for the holographic data and DPII-based 3-D display system.

For the 20th frame of Fig. 7, all background clouds are closely located around the 60th (z60) depth plane, while the airplane ranges from the 130th (z130) to the 145th (z145) depth plane. Here, the distance between two neighboring depth planes in the reconstruction is set to 0.5mm. Thus, the airplane has a depth range of 7.5mm (15 × 0.5mm), and the distance between the background clouds (z60) and the airplane (z130) becomes 3.5cm (70 × 0.5mm). The initial distance between the 1st depth plane and the lens array is set to 3cm, which means that the distance between the lens array and the background clouds becomes 6cm (3cm + 60 × 0.5mm) in the reconstruction process. That is, the airplane, with its depth of 7.5mm, floats in the air at a distance of 9.50cm~10.25cm from the lens array, whereas the background floats at a distance of 6cm from the lens array, which lies within the calculated DOF of 10.40cm of the DPII display system.

Figure 8 shows a structural diagram of the DPII-based 3-D display system and the 3-D images reconstructed from it. The background clouds float at a distance of 6cm from the lens array, and the airplane flies above the clouds along a curved path with a rotating motion, from its starting position in the 1st frame to its ending position in the 30th frame.

Fig. 8 Structural diagram of the DPII-based 3-D display system and reconstructed 3-D scenes from EIAs for the 1st and 30th video frames.

Figure 9 shows the optically fabricated DPII display system, built from the 22.2” IBM LCD monitor and the 295 × 184 lens array. With this optical DPII display system, 30 frames of EIAs are reconstructed into 3-D scenes with their own perspectives. Figure 10 shows the optically reconstructed 3-D scenes from the fabricated DPII display system of Fig. 9, viewed from the left, center and right directions for each of the 1st, 10th, 20th and 30th frames. As seen in Fig. 10, all 3-D scenes have been successfully reconstructed on the 22.2” LCD panel of the fabricated DPII display system, showing different perspectives of the 3-D airplane just like holographic images.

Fig. 9 Optically fabricated DPII-based 3-D display system by combined use of an IBM LCD monitor and a 295 × 184 lens array.

Fig. 10 Optically reconstructed 3-D scenes viewed from the left, center and right directions for each case of the (a) 1st, (b) 10th, (c) 20th, and (d) 30th frames (see Visualization 1).

For easy observation of the relative perspective changes of the 3-D airplane viewed from the left, center and right directions, yellow rectangles are drawn on each reconstructed 3-D scene. As seen in the 1st-frame case of Fig. 10(a), the left-wing edge of the airplane, circled in red, lies inside and somewhat away from the left-hand rectangle in the left view, almost reaches the left-hand rectangle in the center view, and passes over it in the right view. These viewpoint-dependent perspective changes of the 3-D airplane confirm that 3-D scenes are reconstructed with their own DOFs on the fabricated DPII display system.

Moreover, the 20th-frame case of Fig. 10(c) shows the same behavior as the 1st-frame case, except that the airplane rotates counterclockwise as it flies. As seen in Fig. 10(c), in the left view the airplane’s tail, circled in green, lies inside the yellow rectangle, but it moves slightly to the left in the center view and moves even farther away from the left-hand rectangle in the right view. These results also confirm the perspective-variant display of the 3-D airplane in the proposed system.

Thus, all these experimental results validate that the proposed system can optically reconstruct holographic data in the form of a large-scale full-color 3-D image with perspective on the LCD-based DPII display system, using the direct ray-optical H-to-I conversion scheme. In particular, the proposed method provides several advantages over the conventional methods. In the diffraction method, the holographic data is regarded as the diffraction pattern of a 3-D object, so EIs are generated by simulating the diffraction of the 3-D object field through a lens array. These EIs can be reconstructed only on an SLM, which means that the size of the reconstructed image is limited to that of the commercially available SLMs, whose diagonal sizes are in the range of 1” to 2” [8,9]. Moreover, such an SLM-based 3-D display system requires a laser-based, complex time- or space-multiplexed setup for full-color display, just like the holographic 3-D display [9–11].

On the other hand, the proposed method directly generates full-color EIs from the holographic data based on ray optics, so these EIs can simply be reconstructed into a full-color 3-D image on the large-size LCD-based DPII display system employing an incoherent light source. In addition, the DPII-based 3-D display system provides a much larger viewing angle than the SLM-based 3-D display system. For the DPII display system implemented in this paper, the viewing angle is calculated to be about 11°, whereas the viewing angle of an SLM-based 3-D display system is estimated to be only about 1° or 2°, even for a commercial SLM with the smallest pixel pitch of 6µm, since its viewing angle depends strongly on the pixel pitch of the SLM.

Moreover, in the conventional cropping method, the hologram pattern is cropped into sub-holograms that are reconstructed into sub-images with different perspectives for the generation of EIs, which makes the conversion time-consuming. That is, the cropping method needs a total calculation time of zmax·c²·tr + t(s-e) for generating the EIs from the holographic data, where zmax, c, tr and t(s-e) represent the maximum number of sliced depth planes, the cropping number, the time to reconstruct one depth plane of a sub-hologram and the time to convert the SIs into EIs, respectively. In contrast, the proposed method does not require the hologram cropping process, so its conversion time reduces to zmax·tr + te, where te represents the time to generate the EIs with the direct ray-mapping method.
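A back-of-the-envelope comparison illustrates the gap; all timing values below are hypothetical assumptions chosen for illustration, not measurements from the paper.

```python
# Hypothetical per-step timings in seconds (illustrative assumptions only).
z_max = 150    # number of sliced depth planes
c = 13         # cropping number (c x c sub-holograms)
t_r = 0.2      # time to reconstruct one depth plane of one (sub-)hologram
t_se = 5.0     # time to convert the sub-image array into EIs
t_e = 5.0      # time for the direct ray-mapping pickup of the EIs

t_cropping = z_max * c**2 * t_r + t_se  # conventional cropping method
t_proposed = z_max * t_r + t_e          # proposed direct conversion
print(f"cropping: {t_cropping:.0f} s, proposed: {t_proposed:.0f} s")
# cropping: 5075 s, proposed: 35 s
```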

4. Conclusions

In this paper, a new type of DPII-based large-scale full-color 3-D display of holographic data, based on a direct ray-optical H-to-I conversion scheme, has been proposed. The proposed system allows an accelerated direct conversion of holographic data into EIs and, at the same time, the optical reconstruction of those EIs into large-scale full-color 3-D images with perspective on a DPII-based 3-D display system employing a commercial LCD panel. Successful experimental results confirm the feasibility of the proposed system for practical applications in the large-scale full-color 3-D display of holographic data.

Acknowledgments

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (No. 2011-0030079). This work was partly supported by 'The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT and Future Planning, Korea [GK15D0100, Development of Telecommunications Terminal with Digital Holographic Tabletop Display]. The work reported in this paper was conducted during the sabbatical year of Kwangwoon University in 2015.

References and Links

1. C. J. Kuo and M. H. Tsai, Three-Dimensional Holographic Imaging (John Wiley, 2002).

2. Y. Nojiri, H. Yamanoue, S. Ide, S. Yano, and F. Okano, “Parallax distribution and visual comfort on stereoscopic HDTV,” in Proceedings of IBC (2006), p. 373.

3. J. C. A. Read and I. Bohr, “User experience while viewing stereoscopic 3D television,” Ergonomics 57(8), 1140–1153 (2014). [CrossRef]   [PubMed]  

4. Y. Nojiri, H. Yamanoue, A. Hanazato, M. Emoto, and F. Okano, “Visual comfort/discomfort and visual fatigue caused by stereoscopic HDTV viewing,” in Proceedings of Electronic Imaging 2004, International Society for Optics and Photonics (ISOP, 2004), pp. 303–313.

5. F. Yaraş, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. 48(34), H48–H53 (2009). [CrossRef]   [PubMed]  

6. T. Senoh, K. Wakunami, Y. Ichihashi, H. Sasaki, R. Oi, and K. Yamamoto, “Multiview image and depth map coding for holographic TV system,” Opt. Eng. 53(11), 112302 (2014). [CrossRef]  

7. S. Reichelt, R. Häussler, N. Leister, G. Fütterer, H. Stolle, and A. Schwerdtner, “Holographic 3-D displays: electro-holography within the grasp of commercialization,” in Advances in Lasers and Electro Optics, N. Costa and A. Cartaxo, eds. (Academic, 2010).

8. H. Sasaki, K. Yamamoto, Y. Ichihashi, and T. Senoh, “Image size scalable full-parallax coloured three-dimensional video by electronic holography,” Sci. Rep. 4, 4000 (2014). [PubMed]  

9. H. Sasaki, K. Yamamoto, K. Wakunami, Y. Ichihashi, R. Oi, and T. Senoh, “Large size three-dimensional video by electronic holography using multiple spatial light modulators,” Sci. Rep. 4, 6177 (2014). [CrossRef]   [PubMed]  

10. G. Xue, J. Liu, X. Li, J. Jia, Z. Zhang, B. Hu, and Y. Wang, “Multiplexing encoding method for full-color dynamic 3D holographic display,” Opt. Express 22(15), 18473–18482 (2014). [CrossRef]   [PubMed]  

11. T. Shimobaba, T. Takahashi, N. Masuda, and T. Ito, “Numerical study of color holographic projection using space-division method,” Opt. Express 19(11), 10287–10292 (2011). [CrossRef]   [PubMed]  

12. Y. Piao and E.-S. Kim, “Resolution-enhanced reconstruction of far 3-D objects by using a direct pixel mapping method in computational curving-effective integral imaging,” Appl. Opt. 48(34), H222–H230 (2009). [CrossRef]   [PubMed]  

13. H.-H. Kang, J.-H. Lee, and E.-S. Kim, “Enhanced compression rate of integral images by using motion-compensated residual images in three-dimensional integral-imaging,” Opt. Express 20(5), 5440–5459 (2012). [CrossRef]   [PubMed]  

14. Y. Piao, M. Zhang, D. Shin, and H. Yoo, “Three-dimensional imaging and visualization using off-axially distributed image sensing,” Opt. Lett. 38(16), 3162–3164 (2013). [CrossRef]   [PubMed]  

15. P. Benzie, J. Watson, P. Surman, I. Rakkolainen, K. Hopf, H. Urey, V. Sainov, and C. von Kopylow, “A survey of 3DTV displays: techniques and technologies,” IEEE Trans. Circ. Syst. Vid. 17(11), 1647–1658 (2007). [CrossRef]  

16. B. Javidi and J.-S. Jang, “Improved depth of focus, resolution, and viewing angle integral imaging for 3D TV and display,” IEEE LEOS. 2, 726–727 (2003).

17. T. C. Poon, “Three‐dimensional television using optical scanning holography,” J. Inf. Disp. 3(3), 12–16 (2002). [CrossRef]  

18. T. Mishina, M. Okui, and F. Okano, “Calculation of holograms from elemental images captured by integral photography,” Appl. Opt. 45(17), 4026–4036 (2006). [CrossRef]   [PubMed]  

19. J.-H. Park, M.-S. Kim, G. Baasantseren, and N. Kim, “Fresnel and Fourier hologram generation using orthographic projection images,” Opt. Express 17(8), 6320–6334 (2009). [CrossRef]   [PubMed]  

20. A.-Ö. Yöntem and L. Onural, “Integral imaging based 3D display of holographic data,” Opt. Express 20(22), 24175–24195 (2012). [CrossRef]   [PubMed]  

21. A.-Ö. Yöntem and L. Onural, “Integral imaging using phase-only LCoS spatial light modulators as Fresnel lenslet arrays,” J. Opt. Soc. Am. A 28(11), 2359–2375 (2011). [CrossRef]   [PubMed]  

22. S.-C. Kim, P. Sukhbat, and E.-S. Kim, “Generation of three-dimensional integral images from a holographic pattern of 3-D objects,” Appl. Opt. 47(21), 3901–3908 (2008). [CrossRef]   [PubMed]  

23. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26(3), 157–159 (2001). [CrossRef]   [PubMed]  

24. J.-Y. Jang, D. Shin, and E.-S. Kim, “Optical three-dimensional refocusing from elemental images based on a sifting property of the periodic δ-function array in integral-imaging,” Opt. Express 22(2), 1533–1550 (2014). [CrossRef]   [PubMed]  

25. S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of 3D objects using a novel lookup table method,” Appl. Opt. 47(19), D55 (2008). [CrossRef]   [PubMed]  

26. F. Jin, J.-S. Jang, and B. Javidi, “Effects of device resolution on three-dimensional integral imaging,” Opt. Lett. 29(12), 1345–1347 (2004). [CrossRef]   [PubMed]  

Supplementary Material (1)

Visualization 1 (AVI, 3087 KB): Optically reconstructed 3-D scene.
