## Abstract

As a new approach for rapid generation of holographic videos, a so-called compressed novel-look-up-table (C-NLUT), which is composed of only two principal fringe patterns (PFPs), the baseline PFP (B-PFP) and the depth-compensating PFP (DC-PFP), is proposed. Here, the hologram pattern for a 3-D video frame is generated by calculating the fringe patterns for all depth layers only by using the B-PFP, and then transforming them to their depth layers by multiplying them with the corresponding DC-PFPs. With this one-step calculation process, the computational speed (CS) of the proposed method can be greatly enhanced. Experimental results show that the CS of the proposed method is improved by 30.2% on average compared to that of the conventional method. Furthermore, the average calculation time of a new hybrid MC/C-NLUT method, in which both motion-compensation (MC) and one-step calculation schemes are employed, is reduced by 99.7%, 65.4%, 60.2% and 30.2% compared to the conventional ray-tracing, LUT, NLUT and MC-NLUT methods, respectively. In addition, the memory size of the proposed method is reduced by 82 × 10^{6}-fold and 128-fold compared to those of the conventional LUT and NLUT methods, respectively.

© 2014 Optical Society of America

## 1. Introduction

A computer-generated hologram (CGH) is a digital hologram which can be generated by computing the interference pattern produced by object and reference beams [1,2]. In fact, the CGH has attracted much attention in the field of electro-holographic three-dimensional (3-D) displays because it can correctly record and reconstruct the light waves of 3-D objects [1,2].

However, the enormous computational time involved in generation of CGH patterns has prevented them from being widely accepted in a 3-D display system [3]. Therefore, a number of algorithms for accelerating the computational speed have been proposed [3–16]. They include ray-tracing [1,2,4], look-up-table (LUT) [5], split look-up tables (S-LUT) [6], image hologram [7], recurrence relation [8,9], wave-front recording plane (WRP) [10], double-step Fresnel diffraction (DSF) [11] and polygon methods [12–16].

Moreover, a novel-look-up-table (NLUT) method was recently proposed as another approach for fast generation of CGH patterns [17]. In this method, a 3-D object, which is modeled as a collection of self-luminous points of light, is approximated as a set of discretely sliced image planes having different depths, and only the fringe patterns of the center-located object points on each image plane, which are called principal fringe patterns (PFPs), are pre-calculated and stored. Then, the fringe patterns for the other object points on each image plane can be generated just by shifting and adding processes of their PFPs without any additional calculations [17–19].

Here it must be noted that the NLUT, contrary to other methods, generates the CGH patterns of 3-D scenes based on a two-stage process [20–25]. In the first stage, referred to as pre-processing, the number of object points to be calculated is minimized by removing as much of the redundant object data between consecutive 3-D video frames as possible by using motion compensation-based video compression algorithms. This process is uniquely performed in the NLUT method. Then, in the following stage, referred to as main-processing, the CGH patterns for those compressed object data are directly calculated with the NLUT, just as other methods do with their own algorithms.

Thus, the computational speed of the NLUT can be accelerated either by reducing the number of object points to be calculated, or by reducing the CGH calculation time itself. However, until now, most approaches have focused on the pre-processing of the first stage. That is, numerous attempts to eliminate the temporal redundancy between consecutive 3-D video frames have been made by taking advantage of the shift-invariance property of the NLUT, which include the temporal redundancy-based NLUT (TR-NLUT) [20], motion compensation-based NLUT (MC-NLUT) [21], MPEG-based NLUT (MPEG-NLUT) [22] and three-directional motion compensation-based NLUT (3DMC-NLUT) [23] methods.

Here, shift-invariance is one of the unique properties of the NLUT, which implies that the PFPs for the object points located on a depth layer of the 3-D object are the same regardless of their locations on that depth layer [21,22]. Thus, in the NLUT, the hologram pattern for the object points on a depth layer can be generated by simply shifting the center-located PFP, which is referred to as the reference PFP of that depth layer, to the location coordinates of those object points and adding them all together. Therefore, by applying a motion-estimation and compensation concept to the NLUT, a great number of redundant object data between consecutive 3-D video frames can be removed, and only the difference images are applied to the NLUT for CGH calculation, which results in a significant reduction of the NLUT's computational time for CGH generation.

Moreover, the computational speed of the NLUT can be further enhanced by decreasing the CGH calculation time for the compressed object data obtained from the pre-processing of the first stage. However, the conventional NLUT methods that employ pre-processing schemes require a two-step calculation process for each 3-D video frame, which actually limits the computational speed [20]. In those methods, the difference images between two consecutive video frames, which are obtained from the pre-processing, always consist of object points that disappeared from the previous frame and object points newly added in the current frame. Thus, in order to generate the CGH pattern for the current video frame, the hologram pattern for the disappeared object points of the previous frame is calculated based on the shifting and adding process of the PFP, and is then subtracted from the CGH pattern of the previous frame. At the same time, the hologram pattern for the newly added object points of the current frame is also calculated based on the same process, and is then added to the CGH pattern of the previous frame. Thus, the two operations of subtraction and addition are required for calculating the CGH pattern of each video frame, and this two-step calculation process limits the computational speed of the conventional NLUT methods.

Accordingly, in this paper, a new type of NLUT, called the compressed novel-look-up-table (C-NLUT), is proposed for fast calculation of the CGH patterns of 3-D video frames with only a one-step process based on its unique thin-lens property. The proposed C-NLUT is composed of only two PFPs: one is the baseline PFP (B-PFP) designated for the 1st depth layer of the 3-D video frame, and the other is the depth-compensating PFP (DC-PFP) for compensating the depth differences between the baseline and the other depth layers.

Basically, in the NLUT method, the PFPs for each depth layer are calculated as forms of Fresnel zone plates (FZPs); thus, they can be treated as thin lenses with different focal lengths corresponding to their depth layers. Therefore, based on this thin-lens property, the hologram patterns for each depth layer of a 3-D video frame can be generated by calculating the hologram patterns for all depth layers only by using the B-PFP, and then transforming them to their depth layers just by multiplying them with the corresponding DC-PFPs, which act just like thin lenses having focal lengths corresponding to the depth differences between the baseline and the other depth layers. In other words, the proposed method can calculate the CGH patterns of 3-D video frames with just a one-step multiplication process instead of the two-step subtraction and addition operations of the conventional methods, which results in a great acceleration of the computational speed of the NLUT.

Moreover, the proposed method uses only the two PFPs, the B-PFP and the DC-PFP, unlike the conventional NLUTs, in which all the PFPs for each depth layer of a 3-D video frame must be pre-calculated and stored. Thus, the memory capacity of the proposed method can be dramatically reduced, down to the order of kilobytes (KB) from the order of megabytes (MB) of the conventional NLUT methods.

Consequently, the computational speed and the memory capacity of the proposed method are expected to be greatly enhanced and massively reduced, respectively, compared to the conventional method, since the CGH patterns are calculated based on the one-step process with only two PFPs. To confirm the feasibility of the proposed method, experiments with a test 3-D video are performed, and the results are compared to those of the conventional ray-tracing, LUT, NLUT and MC-NLUT methods in terms of the average number of calculated object points, the average calculation time per object point and the memory capacity.

## 2. Unique properties of the NLUT

#### 2.1 NLUT method

Figure 1(a) shows a geometric structure to compute the Fresnel fringe pattern of a 3-D object with the NLUT method. Here, the location coordinate of the *p*^{th} object point at the *q*^{th} depth layer is specified by (*x_{p}*, *y_{p}*, *z_{q}*), and each object point is assumed to have an associated real-valued magnitude and phase of *a_{p}* and *φ_{p}*, respectively. Also, the CGH pattern of the 3-D object is assumed to be positioned on the depth plane of *z* = 0 [17].

In fact, a 3-D object can be treated as a set of image planes discretely sliced along the *z*-direction, in which each image plane having a fixed depth is approximated as a collection of self-luminous object points of light. In the NLUT method, only the fringe patterns of the object points located at the centers of each image plane, which are called principal fringe patterns (PFPs), are pre-calculated and stored [17]. Therefore, as seen in Fig. 1(a), the unity-magnitude PFP for the object point (*x*_{0}, *y*_{0}, *z_{q}*) located on the image plane with a depth of *z_{q}*, *T_{q}*(*x*, *y*), can be defined as Eq. (1) [17],

where the wave number *k* is given by *k* = 2*π*/*λ*, and *λ* is the free-space wavelength of the light. Then, the fringe patterns for the other object points on each image plane can be obtained by simply shifting this PFP according to the relative location values from the center to those object points.
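Since Eq. (1) is not reproduced inline here, the following sketch assumes a common complex-exponential Fresnel-zone-plate form for the unity-magnitude PFP (the paper's exact expression may differ, e.g. a real-valued cosine pattern); the grid size, pixel pitch and wavelength are illustrative values:

```python
import numpy as np

def make_pfp(nx, ny, pitch, z_q, wavelength=532e-9):
    """Unity-magnitude PFP for a point centered on a depth layer at z_q.

    Assumes a complex-exponential Fresnel zone plate; the paper's
    Eq. (1) may use a real-valued variant instead.
    """
    k = 2 * np.pi / wavelength                # wave number, k = 2*pi/lambda
    x = (np.arange(nx) - nx // 2) * pitch     # hologram-plane coordinates
    y = (np.arange(ny) - ny // 2) * pitch
    xx, yy = np.meshgrid(x, y)
    return np.exp(1j * k * (xx**2 + yy**2) / (2 * z_q))

pfp = make_pfp(256, 256, 10e-6, 0.5)          # 10 um pixels, z_q = 500 mm
```

The phase grows quadratically away from the center, which is what makes the pattern act as a lens of focal depth *z_{q}* in the sections that follow.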

Figure 1(b) shows the NLUT-based CGH generation process for two object points *B*(*x*_{1}, *y*_{1}, *z*_{1}) and *C*(-*x*_{2}, -*y*_{2}, *z*_{1}) located on the depth layer of *z*_{1}. In the case of the object point *B*(*x*_{1}, *y*_{1}, *z*_{1}), which is displaced by (*x*_{1}, *y*_{1}) from the center point on the image plane of *z*_{1}, the CGH pattern for this object point can be obtained by simply shifting the PFP for the center object point by amounts of *x*_{1} and *y*_{1} along the *x* and *y* directions, respectively, as shown in Fig. 1(b). Following the same procedure, the CGH pattern for the object point *C*(-*x*_{2}, -*y*_{2}, *z*_{1}) located on the same image plane can also be obtained just by shifting the PFP. Accordingly, this process is performed for all object points in all depth layers, and then all shifted versions of the PFPs are added together to get the final CGH pattern for an arbitrary 3-D object.

Accordingly, in the NLUT method, the CGH pattern of a 3-D object, *I*(*x*, *y*) can be expressed in terms of the shifted versions of PFPs of Eq. (1) as shown in Eq. (2) [17].

where *P* and *Q* denote the number of object points on the *q*^{th} depth layer and the number of depth layers, respectively.
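The shift-and-add of Eq. (2) can be sketched as follows, with `np.roll` standing in for the PFP shift and a random-phase array standing in for a real PFP (point coordinates are assumed to be given in pixels):

```python
import numpy as np

def nlut_cgh(points, pfps, shape):
    """Shift-and-add CGH in the spirit of Eq. (2): for each object point,
    shift the pre-computed PFP of its depth layer by the point's in-plane
    offset and accumulate. `points` is a list of (dx, dy, q, amplitude)
    with pixel offsets; `pfps[q]` is the center PFP of depth layer q."""
    cgh = np.zeros(shape, dtype=complex)
    for dx, dy, q, a in points:
        cgh += a * np.roll(pfps[q], shift=(dy, dx), axis=(0, 1))
    return cgh

# two points on the same layer reuse one PFP, only shifted
rng = np.random.default_rng(0)
pfp = np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))
cgh = nlut_cgh([(5, 3, 0, 1.0), (-7, 2, 0, 0.5)], {0: pfp}, (64, 64))
```

Note that no per-point fringe computation occurs inside the loop; each point costs only one shift and one addition, which is the source of the NLUT's speed advantage.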

#### 2.2 Shift-invariance property of the NLUT

As mentioned above, the NLUT has a unique property of shift-invariance unlike other methods. With this property, the NLUT can generate the CGH patterns for each depth layer of a 3-D video frame based on simple shifting and adding processes of their PFPs. Figure 2 shows a schematic of the shift-invariance property of the NLUT [21,22]. As seen in the left side of Fig. 2, a cube block is assumed to be located at *A*(0, 0, -*z*_{1}) at time *t*_{1} and moved to *B*(*x*_{1}, *y*_{1}, -*z*_{1}) at time *t*_{2}. Then, the calculated CGH pattern for the cube block at *A*, which is called CGH*_{A}*, and the cube image reconstructed from this CGH*_{A}* at the location of *A'*(0, 0, *z*_{1}) are shown at the center and the right-hand side of Fig. 2, respectively [22].

Moreover, the CGH pattern for the cube block moved to the location of *B*, which is called here CGH*_{B}*, can be obtained by simply shifting the hologram pattern of CGH*_{A}* to the location of (*x*_{1}, *y*_{1}) without any calculations. At the same time, from this hologram pattern of CGH*_{B}*, which is regarded as a shifted version of CGH*_{A}*, the corresponding cube image can be reconstructed at the shifted location of *B'*(*x*_{1}, *y*_{1}, *z*_{1}) on the right-hand side of Fig. 2. Thus, in the NLUT method, the CGH pattern for a moved object can be generated only with the moving distances of the object, without additional calculations, by taking advantage of its shift-invariance property. Accordingly, if the motion vectors of the 3-D objects between two successive video frames of *t*_{1} and *t*_{2} are effectively extracted, the CGH pattern for the video frame of *t*_{2} can be generated just by shifting the pre-calculated CGH pattern for the video frame of *t*_{1} according to the extracted motion vectors.

Here, this shift-invariance property of the NLUT is directly matched to the motion estimation and compensation concept which has been widely employed in compression of two-dimensional (2-D) video data. Therefore, by applying this concept to the NLUT, 3-D object data can be also massively reduced by eliminating the redundant object data between the consecutive 3-D video frames, which results in a significant enhancement of the computational speed of the NLUT.

However, other methods including ray-tracing [1,2,4], LUT [5], split look-up tables (S-LUT) [6], image hologram [7], WRP [10] and DSF [11] methods do not have this kind of shift-invariance property. Therefore, a simple shifting and adding operation-based CGH calculation process cannot be applied to them on the hologram plane, which means CGH patterns for all object points must be calculated in those methods.

#### 2.3 Thin-lens property of the NLUT

As mentioned above, since the PFPs for each depth layer are calculated as forms of Fresnel zone plates (FZPs) in the NLUT, these can be treated as thin-lenses with different focal lengths corresponding to their depth layers [26].

Figure 3 shows the conceptual diagram of the thin-lens property of the NLUT. As seen in Figs. 3(a) and 3(b), object points *A* and *B* are reconstructed with PFP*_{A}* and PFP*_{B}*, respectively. Thus, PFP*_{A}* and PFP*_{B}*, with focal lengths of *z_{a}* and *z_{b}*, can be defined by Eqs. (3) and (4), respectively, by using Eq. (1).

Here, if the PFP*_{B}* is sandwiched with the PFP*_{A}* to make a new composite PFP*_{C}*, as seen in Fig. 3(c), the PFP*_{C}*, *T_{C}*(*x*, *y*), and its focal length *z_{c}* can be represented by Eqs. (5) and (6), respectively.

Equation (5) shows that the PFP*_{A}* with the focused depth plane of *z_{a}* can be shifted to the new focused depth plane of *z_{c}* just by being attached to the PFP*_{B}* having the focused depth plane of *z_{b}*. Here, in case a PFP having a positive focal length, such as a convex lens, is attached, the focal length of the composite PFP decreases. On the other hand, for a PFP with a negative focal length, such as a concave lens, the corresponding focal length of the composite PFP increases. Thus, with this thin-lens property of the NLUT, depth shifting of object points or of a 3-D object becomes possible.

However, other methods including the ray-tracing [1,2,4], LUT [5], split look-up tables (S-LUT) [6], image hologram [7], WRP [10] and DSF [11] methods do not have this kind of thin-lens property because their hologram patterns are independently calculated for each depth layer. Therefore, a simple multiplying operation-based CGH calculation process cannot be applied to them on the hologram plane.
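For complex-exponential PFPs (an assumed form; real-valued PFPs behave analogously up to conjugate terms), the composition behind Eqs. (5) and (6) follows directly from adding the quadratic phases:

```latex
T_A(x,y)\,T_B(x,y)
= e^{\,jk\frac{x^2+y^2}{2z_a}}\; e^{\,jk\frac{x^2+y^2}{2z_b}}
= e^{\,jk\frac{x^2+y^2}{2z_c}},
\qquad
\frac{1}{z_c}=\frac{1}{z_a}+\frac{1}{z_b}
```

which is exactly the combination rule for two thin lenses in contact.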

## 3. Proposed method

Figure 4 shows an overall block-diagram of the proposed C-NLUT method, which is largely composed of three stages. In the first stage, intensity and depth differences between two consecutive (previous and current) frames of the input 3-D video images are extracted. In the second stage, the CGH pattern for the first 3-D video frame is calculated by combined use of the baseline PFP (B-PFP) and the depth-compensating PFP (DC-PFP). That is, the hologram patterns for the object points located on each depth layer of the first 3-D video frame are calculated only by using the B-PFP, which is designated as the PFP for the 1st depth layer, and they are then transformed to their original depth layers by being multiplied with the respective DC-PFPs. The CGH pattern for the first 3-D video frame is finally obtained by accumulating all those hologram patterns calculated for each depth layer.

Moreover, for the remaining 3-D video frames, only the hologram patterns for the changed object points in the previous and current video frames are compensated. That is, intensity and depth changes of the object points can be compensated just by multiplying the hologram patterns of the changed object points by their intensity differences and the corresponding DC-PFPs, respectively. In the third stage, the calculated CGH patterns for each video frame are transmitted to the CGH video output and are also stored in the previous-frame buffer of the CGH for computing the CGH pattern of the next 3-D video frame.

#### 3.1 Construction of the proposed C-NLUT

The proposed C-NLUT is composed of only two PFPs: one is the B-PFP, which is referred to as the PFP for the 1st depth layer of the 3-D scene, and the other is the DC-PFP, which is used for compensation of the depth differences between the baseline (the 1st depth layer) and the other depth layers. As mentioned above, the PFPs are calculated as forms of FZPs; therefore, they can be treated as thin lenses with different focal lengths corresponding to their depth differences, since each PFP forms a focused object point just as a thin lens does [26]. Here, the only difference is that a PFP operates based on diffraction optics instead of the refraction optics on which a thin lens is based.

Therefore, the PFPs for each depth layer can be calculated from the B-PFP by being multiplied with the DC-PFPs having the focal lengths corresponding to the depth differences between the baseline and the other depth layers. In other words, all those PFPs need not be pre-calculated and stored as in the conventional NLUT method, but can be directly generated by combined use of the B-PFP and the DC-PFP during the CGH calculation process, which also accounts for the minimal memory usage of the proposed method.

For a ray-optical analysis of the proposed method, if a thin lens having the focal length of *Δz_{a}* is attached to a thin lens having the focal length of *z*_{1}, the resultant focal length of the combined lens system becomes *z*_{2} according to the ABCD matrix, as shown in Eqs. (7) and (8).
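A sketch of the ABCD-matrix composition consistent with Eqs. (7) and (8): for two thin lenses in contact, the ray-transfer matrices multiply, and the inverse focal lengths add:

```latex
\begin{pmatrix} 1 & 0 \\ -\dfrac{1}{\Delta z_a} & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ -\dfrac{1}{z_1} & 1 \end{pmatrix}
=
\begin{pmatrix} 1 & 0 \\ -\left(\dfrac{1}{z_1}+\dfrac{1}{\Delta z_a}\right) & 1 \end{pmatrix}
\quad\Longrightarrow\quad
\frac{1}{z_2}=\frac{1}{z_1}+\frac{1}{\Delta z_a}
```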

Likewise, in the NLUT, the unity-magnitude PFP for the object point (*x*_{0}, *y*_{0}, *z_{n}*), which is positioned on the center of the *n*^{th} object plane with the depth of *z_{n}*, *T_{n}*(*x*, *y*), can be defined as Eq. (1). Just like attaching a thin lens, if the DC-PFP (*T_{DC}*) having the depth distance of *Δz_{DC}* is multiplied with the B-PFP (*T_{B}*) having the depth distance of *z_{B}*, which is equivalent to the PFP for the 1st depth layer, the resultant composite PFP (*T*_{2}) can be expressed by Eq. (9).

As seen in Eq. (9), the focusing depth of the 2^{nd} PFP (*T*_{2}) becomes 1/*z*_{2} = 1/*z_{B}* + 1/*Δz_{DC}*, just like the thin-lens case of Eq. (8). Since Eq. (9) represents the PFP for the 2^{nd} depth layer, the PFP and the depth distance for the *n*^{th} depth layer, *T_{n}* and *z_{n}*, can be expressed by Eqs. (10) and (11), respectively.

Here, Eqs. (10) and (11) show that just by multiplying the DC-PFP with the PFP for the previous depth plane, the PFP for the current depth plane can be generated. Moreover, since Eq. (10) can be rewritten as Eq. (12), it is confirmed that the *n*^{th} PFP (*T_{n}*) can be calculated just by multiplying the DC-PFP (*T_{DC}*) with the (*n*-1)^{th} PFP (*T*_{n-1}).
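The recursion of Eq. (12) can be checked numerically, again assuming complex-exponential FZPs (the wavelength and pixel pitch below are illustrative): multiplying the B-PFP by the DC-PFP (*n*-1) times reproduces the PFP computed directly for the combined focal depth of Eq. (11).

```python
import numpy as np

wavelength, pitch = 532e-9, 10e-6          # assumed optical parameters
k = 2 * np.pi / wavelength
r = (np.arange(128) - 64) * pitch
xx, yy = np.meshgrid(r, r)
rho2 = xx**2 + yy**2

def fzp(z):
    """Complex-exponential zone plate of focal depth z (an assumed form)."""
    return np.exp(1j * k * rho2 / (2 * z))

z_B, dz_DC = 0.5, -617.0                   # z_B = 500 mm, dz_DC = -617,000 mm
T_B, T_DC = fzp(z_B), fzp(dz_DC)

# recursion of Eq. (12): T_n = T_DC * T_{n-1}, starting from T_1 = T_B
T_4 = T_B
for _ in range(3):
    T_4 = T_DC * T_4
```

Because only `T_B` and `T_DC` are ever stored, the memory footprint is two patterns regardless of the number of depth layers.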

Therefore, in the proposed C-NLUT method, all the PFPs for each depth layer of a 3-D video frame are not stored as in the conventional NLUT, but are generated by combined use of the B-PFP (*T_{B}*) and the DC-PFP (*T_{DC}*) with Eq. (12), based on the thin-lens property of the NLUT. Moreover, contrary to the conventional method, in which the two calculations of subtraction and addition are required for generating the CGH patterns for movements of object points along the depth direction, only a one-step multiplication is required. That is, the CGH patterns for each depth layer of a 3-D video frame can be generated by calculating the hologram patterns for all depth layers only with the B-PFP (*T_{B}*), and then transforming them to their depth layers just by being multiplied with the DC-PFP (*T_{DC}*), which results in a great acceleration of the computational speed.

#### 3.2 Non-uniform depth quantization of a 3-D scene

In general, depth values are uniformly quantized in a 3-D scene, which means that the quantization step (*Δz*) is the same over the whole depth range. Here, the distance from the viewing plane to a point on a 3-D object is assumed to be *z*. Then, for all objects in a 3-D scene, the depth values lie within *z_{min}* < *z* < *z_{max}*, where *z_{min}* and *z_{max}* are the distances to the nearest and the farthest objects in the 3-D scene, respectively. In general, depth data are stored as inverted data *D*, as shown in Eq. (13) [27].
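Eq. (13) itself is not reproduced in this excerpt; a common MPEG-style 8-bit inverse-depth mapping of this kind (given here as an assumption, not the paper's exact formula) is:

```python
import numpy as np

def depth_to_inverse(z, z_min, z_max, levels=256):
    """Store depth as inverted data D: near objects (small z) get large D,
    so quantization is finer near the viewer. An assumed MPEG-style form;
    the paper's exact Eq. (13) may differ."""
    d = (1.0 / z - 1.0 / z_max) / (1.0 / z_min - 1.0 / z_max)
    return int(np.round((levels - 1) * d))

D_near = depth_to_inverse(0.5, 0.5, 2.0)   # nearest object
D_far = depth_to_inverse(2.0, 0.5, 2.0)    # farthest object
D_mid = depth_to_inverse(1.0, 0.5, 2.0)
```

Because the mapping is linear in 1/*z*, equal steps in *D* correspond to progressively coarser steps in *z* for far objects, which matches the HVS argument that follows.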

On the other hand, it can be intuitively understood that an exact representation of depth values is very important for near objects, whereas small degradations in depth values are mostly well tolerated by the human visual system (HVS) for far objects [27–30]. Therefore, non-uniform quantization would be appropriate for depth-information applications such as compression and display.

Recently, O. Stankiewicz proposed a non-uniform depth quantization method for efficient compression of depth information in 3-D video within the MPEG standard [30]. In this method, Eq. (14) is used as a non-uniform depth transform function [27,30],

where *a*, *D*_{max}, *E* and *E*_{max} denote the quantization parameter (QP) value, the maximum value of *D*, the transformed depth value and the maximum value of *E*, respectively. Here, the QP is automatically chosen by the encoder with the base QP for the depth, and sent to the decoder in the encoded bit stream to decode the encoded data. With this method, a bitrate reduction of up to 4% was reported with no measurable increase in complexity [27].

In this paper, another non-uniform transform function, designed by considering the HVS, is employed, as shown in Fig. 5. Figure 5(a) shows the geometry of stereopsis. According to Fig. 5(a), the depth quantization step (*Δz*) and the perceived distance at the depth value of *z_{n}* can be expressed by Eqs. (15) and (16), respectively [31],

where *d* and *z* denote the distance between the two eyes and the distance between the viewer and the object, respectively. In addition, *θ* denotes the stereo-acuity, representing the smallest detectable depth difference in binocular vision. Thus, in this paper, to determine a reasonable depth quantization step in Eqs. (15) and (16), *θ* is set to 10 arcsec, considering the human depth-perception sensitivity, which is on the order of 10-15 arcsec for various spatio-temporal frequencies of stimulation [31–33].

In order to use Eq. (16) as a non-uniform function, *z_{B}* and *Δz_{DC}* of Eq. (11) should be reasonably determined. That is, the perceived depth of the HVS and the focusing depth of the proposed method should be matched by controlling the parameters in Eq. (11). Figure 5(b) shows the dependences of the focusing depth and the perceived depth on the depth values. The black line in Fig. 5(b) represents the focusing-depth distance *z_{n}* depending on the depth layer *n*. In Fig. 5(b), *z_{B}* and *Δz_{DC}* of Eq. (11) are set to 500 *mm* and −617,000 *mm*, respectively. That is, a PFP having a negative focal length is used as the DC-PFP. The red line with red dots in Fig. 5(b) shows the perceived-depth distance *z_{n}* depending on the depth quantization step *Δz* using Eqs. (15) and (16). As seen in Fig. 5(b), the two lines look almost the same over the whole depth range, which means that the focusing depth of the proposed method is directly matched with that of the HVS in the depth recognition of a 3-D scene.

Therefore, in this paper, depth images are quantized with non-uniform depth steps for efficient generation of 3-D video holograms and their reconstruction, while satisfying the nonlinear depth quantization characteristic of the composite PFP system [27–31]. Figures 6(b) and 6(c) show the depth images quantized with uniform and non-uniform depth steps, respectively. As seen in Fig. 6, the depth image of Fig. 6(c) looks slightly brighter than that of Fig. 6(b), because the depth step for near objects becomes much smaller than that for far objects. In this paper, the non-uniform depth image of Fig. 6(c) as well as the intensity image of Fig. 6(a) is used as the input 3-D image data.

#### 3.3 Generation of CGH patterns with the proposed method

##### 3.3.1 Generation of the CGH pattern for the first 3-D video frame

In the NLUT, the CGHs for the object points located on each object plane can be obtained just by shifting the PFPs according to the relative location values from the center to those object points and adding them all together; therefore, the CGH pattern of a 3-D object, *I*(*x*, *y*), can be expressed by Eq. (17) [17].

where *N*, *M_{z}* and *I_{z}* represent the number of depth layers, the number of object points on the depth layer of *z*, and the hologram pattern for the depth layer of *z*, respectively.

Figure 7(a) shows a conceptual diagram of the proposed method, in which the image space is divided into *N* depth layers with non-uniform depth steps using Eq. (11) along the longitudinal direction, and the hologram pattern (*I_{Bn}*) for the depth layer of *z_{n}* is generated using *T_{B}*.

Then, as seen in Fig. 7(b), the hologram pattern *I_{n}* for the object points on the depth layer of *z_{n}* can be calculated with the PFP generated from Eq. (10), and is given by Eq. (18),

where *M_{n}* and *I_{Bn}* represent the number of object points on the depth plane of *z_{n}* and the hologram pattern for that depth plane generated using the B-PFP (*T_{B}*) with the depth of *z_{B}*, respectively. As seen in Eq. (18), *I_{n}* is expressed as *T_{DC}*^{n} times *I_{Bn}*; thus, *I_{n}* can be calculated by multiplying the DC-PFP (*T_{DC}*) with *I_{Bn}* *n* times. Therefore, the final CGH pattern for all depth planes can be generated by using Eqs. (17) and (18).
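The per-layer pipeline of Eqs. (17) and (18) can be sketched as follows, with random-phase arrays standing in for the actual B-PFP and DC-PFP and pixel offsets assumed for the point coordinates:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)
T_B = np.exp(1j * rng.uniform(0, 2 * np.pi, shape))   # stand-in B-PFP
T_DC = np.exp(1j * rng.uniform(0, 2 * np.pi, shape))  # stand-in DC-PFP

def layer_hologram(points, n):
    """I_n = T_DC**n * I_Bn in the spirit of Eq. (18): shift-and-add the
    B-PFP for every point of layer n, then depth-shift the whole layer in
    one multiplication. `points` is a list of (dx, dy, amplitude)."""
    I_Bn = np.zeros(shape, dtype=complex)
    for dx, dy, a in points:
        I_Bn += a * np.roll(T_B, shift=(dy, dx), axis=(0, 1))
    return T_DC**n * I_Bn

I_2 = layer_hologram([(4, 1, 1.0), (-3, 6, 0.7)], n=2)
```

The depth transformation costs one element-wise multiplication per layer, independent of how many object points the layer contains.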

##### 3.3.2 Generation of the hologram pattern for other 3-D video frames

Figure 7(c) shows another case, in which a 3-D object moves by an amount of (*o*-*n*)*Δz* along the depth direction, which means the object moves from the depth layer of *z_{n}* to that of *z_{o}*. Then, the hologram pattern *I_{o}* for the moved object can be calculated just by multiplying the DC-PFP (*T_{DC}*) with *I_{n}* (*o*-*n*) times, as follows.

Generally, for 3-D video images, three kinds of changes of object points, namely changes of intensity, of depth, or of both, may occur between two consecutive video frames. For example, here we assume four object points *A*(*x*_{1}, *y*_{1}, *z*_{1}), *B*(*x*_{1}, *y*_{1}, *z*_{1}), *C*(*x*_{1}, *y*_{1}, *z*_{2}) and *D*(*x*_{1}, *y*_{1}, *z*_{2}) with amplitudes of *a*_{1}, *a*_{2}, *a*_{1} and *a*_{2}, respectively.

Then, the CGH pattern for the object point *A* can be represented by Eq. (20).

where *T*_{1} represents the PFP for the depth layer of *z*_{1}. For the case that the object point *A* moves to *B* on the same depth plane of *z*_{1}, only the intensity value is changed, from *a*_{1} to *a*_{2}, so the CGH pattern for the object point *B* can be given by

Next, if the object point *A* longitudinally moves to *C* with the same intensity value, only the depth value is changed, from *z*_{1} to *z*_{2}; therefore, the CGH pattern for the object point *C* can be given by

Moreover, for the case that the object point *A* moves to *D*, both the intensity and depth values are changed, from *a*_{1} to *a*_{2} and from *z*_{1} to *z*_{2}, respectively; thus, the CGH pattern for the object point *D* can be calculated by
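The three compensation cases of Eqs. (21)-(23) reduce to a scaling and/or a DC-PFP multiplication applied to the already-computed hologram of point *A*; a sketch with stand-in arrays (the single-multiplication depth step from *z*_{1} to *z*_{2} follows Eq. (12)):

```python
import numpy as np

rng = np.random.default_rng(2)
T_DC = np.exp(1j * rng.uniform(0, 2 * np.pi, (32, 32)))  # stand-in DC-PFP
I_A = rng.uniform(0.1, 1.0, (32, 32)) * T_DC             # stand-in hologram of A

a1, a2 = 0.8, 0.5

I_B = (a2 / a1) * I_A            # intensity change only (A -> B)
I_C = T_DC * I_A                 # depth change only (A -> C, z1 -> z2)
I_D = (a2 / a1) * T_DC * I_A     # both intensity and depth change (A -> D)
```

Each case is a single one-step operation on the existing hologram, replacing the subtract-then-add pair of the conventional two-step update.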

## 4. Experiments and the results

To confirm the feasibility of the proposed method, experiments are performed. As a test video, 60 frames of 3-D images, each having a resolution of 500 × 400 pixels, are generated with 3DS MAX, and five of them are shown in Fig. 8. The sequential views of the test 3-D scene are composed of a fixed ‘House’ and a ‘Car’ moving around the house at different depths, and they are captured by panning the camera from right to left. Since the objects move in a rotational motion, their perspectives may vary frame by frame, which results in large changes in the intensity and depth data of the objects between consecutive video frames.

Figure 9 shows the difference images obtained between two consecutive video frames of test 3-D video images using the MC-NLUT method. As seen in Fig. 9, relatively large amounts of the difference images between the consecutive video frames were extracted.

In the experiments, CGH patterns with a resolution of 1,600 × 1,600 pixels, in which each pixel measures 10 *μm* × 10 *μm*, are generated from the intensity and depth data of the test video of Fig. 8. Moreover, the viewing distance and the discretization step in the horizontal and vertical directions are set to 600 *mm* and 30 *μm*, respectively, which means the amount of pixel shift is given by 3 pixels in the proposed method [17]. Thus, to fully display the hologram patterns, the PFP must be shifted by 1,500 (500 × 3) and 1,200 (400 × 3) pixels along the horizontal and vertical directions, respectively. Hence, the total resolution of the PFP becomes 3,100 (1,600 + 1,500) × 2,800 (1,600 + 1,200) pixels [17].
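The quoted PFP resolution follows from simple bookkeeping, which can be checked directly:

```python
# pixel-shift bookkeeping for the experiment's parameters
disc_step_um, pixel_um = 30, 10
shift_px = disc_step_um // pixel_um   # PFP shift per object-point step

img_w, img_h = 500, 400               # test-video resolution
cgh_w, cgh_h = 1600, 1600             # CGH resolution

# the PFP must cover the CGH plus the full range of shifts
pfp_w = cgh_w + img_w * shift_px
pfp_h = cgh_h + img_h * shift_px
```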

Figure 10 shows the 3-D object images reconstructed from the CGH patterns generated with the proposed method, at focusing depths of 575 *mm* and 590 *mm*.

The comparison results on the numbers of calculated object points and the calculation times per object point of the conventional and proposed methods are shown in Fig. 11, and the details of those results are summarized in Table 1.

As seen in Fig. 11, the original NLUT has the largest number of calculated object points among them because all object points in each video frame get involved in the CGH calculation. On the other hand, the number of calculated object points of the MC-NLUT is reduced compared to that of the NLUT because this method computes the CGH patterns only for the difference images between consecutive video frames.

Once the number of calculated object points has been obtained, the CGH patterns for them are calculated. As explained above, the proposed C-NLUT can calculate the CGH patterns with only a one-step process, contrary to the conventional method requiring a two-step process. Thus, by combined use of this proposed C-NLUT with the conventional MC-NLUT method, the computational speed can be greatly enhanced. Here, this new type of NLUT, employing both the motion-compensation and one-step calculation schemes, is called the hybrid MC/C-NLUT: the number of calculated object points is first reduced by using the MC-NLUT method based on the motion estimation and compensation process, and then the CGH pattern for these compressed object points is calculated with the proposed C-NLUT based on the one-step calculation process.

As seen in Fig. 11(a), the number of calculated object points of the hybrid MC/C-NLUT has been reduced to half of that of the MC-NLUT: only one set of changed object points per frame is involved in the CGH calculation of the C-NLUT, whereas the conventional MC-NLUT calculates the CGH pattern for two sets of changed object points, located on the previous and current frames. This reduction in the number of calculated object points also produces a corresponding decrease of the calculation time per object point, as shown in Fig. 11(b).

Table 1 shows the average number of calculated object points and the average calculation time per object point, computed over 60 video frames, for each of the conventional and proposed methods. In the experiments, a PC with an Intel Core^{TM} i7 processor operating at 3.4 GHz, 8 GB of main memory, Microsoft Windows 7, and MATLAB 2012 is used.

As seen in Table 1, the average numbers of calculated object points of the conventional LUT and NLUT methods are the same as that of the ray-tracing method because all object points must be calculated in those methods. Furthermore, the average number of calculated object points of the MC-NLUT method has been reduced by 42.8% because only the changing parts between the motion-compensated object image of the previous frame and the object image of the current frame are calculated, with the two-step process. The average number of calculated object points of the proposed method has been further reduced, by 70.6% in total, because the one-step process requires only a single set of those changing parts per frame.

On the other hand, the average calculation times are estimated to be 1,479.6 *ms*, 10.7 *ms*, 9.3 *ms*, 5.3 *ms* and 3.7 *ms* for the conventional ray-tracing, LUT, NLUT, MC-NLUT and hybrid MC/C-NLUT methods, respectively, as seen in Table 1. In other words, the average calculation time of the hybrid MC/C-NLUT method has been reduced by 99.7%, 65.4%, 60.2% and 30.2% compared to the conventional ray-tracing, LUT, NLUT, and MC-NLUT methods, respectively.

In fact, the direct impact of the C-NLUT on the conventional method can be analyzed simply by comparing the number of calculated object points and the calculation time per object point between the conventional MC-NLUT and the hybrid MC/C-NLUT methods. As seen in Table 1, the number of calculated object points and the calculation time per object point of the hybrid MC/C-NLUT have been reduced by 48.5% and 30.2%, respectively, compared to those of the conventional MC-NLUT. These results reveal that the computational speed of the conventional NLUT methods can be improved by 30.2% simply by employing the C-NLUT in the CGH calculation process.

Furthermore, the 60.2% improvement of the hybrid MC/C-NLUT in computational speed over the conventional NLUT method can be decomposed into two contributions: a 43.0% reduction from the motion-compensation process of the MC-NLUT (9.3 *ms* to 5.3 *ms*) and a further 30.2% reduction from the one-step calculation process of the proposed C-NLUT (5.3 *ms* to 3.7 *ms*). Note that successive fractional reductions combine multiplicatively rather than additively: 1 − (1 − 0.430)(1 − 0.302) = 0.602.
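As a small consistency check on this decomposition, the fractional reductions computed from the per-point times quoted above compose multiplicatively to the overall figure:

```python
# Consistency check using the average calculation times quoted from Table 1.
t_nlut, t_mc, t_hybrid = 9.3, 5.3, 3.7  # ms per object point

r_mc = 1 - t_mc / t_nlut          # reduction from motion compensation
r_onestep = 1 - t_hybrid / t_mc   # reduction from the one-step C-NLUT
r_total = 1 - t_hybrid / t_nlut   # combined reduction vs. the NLUT

# successive fractional reductions compose multiplicatively, not additively
assert abs((1 - r_mc) * (1 - r_onestep) - (1 - r_total)) < 1e-12
print(round(r_mc, 3), round(r_onestep, 3), round(r_total, 3))  # 0.43 0.302 0.602
```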

In addition, Table 1 also shows a comparison of the memory capacities required by the conventional LUT and NLUT methods and the proposed C-NLUT method. The memory capacity of the proposed method has been dramatically reduced, down to the order of kilobytes (KB) from the terabytes (TB) and megabytes (MB) of the conventional LUT and NLUT, respectively, because only two PFPs are stored in the proposed C-NLUT. In other words, the memory capacity of the proposed C-NLUT has been reduced by 82 × 10^{6}-fold and 128-fold compared to those of the conventional LUT and NLUT methods, respectively.
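One plausible back-of-envelope reading of the 128-fold figure is sketched below; the 256-layer depth quantization is an assumption for illustration (e.g., an 8-bit depth map), not a value stated in this section:

```python
# Back-of-envelope reading of the 128-fold memory reduction (assumed numbers):
# a conventional NLUT stores one PFP per quantized depth layer, while the
# proposed C-NLUT keeps only two patterns of comparable size.
n_depth_layers = 256             # assumed 8-bit depth quantization
nlut_patterns = n_depth_layers   # conventional NLUT: one PFP per layer
cnlut_patterns = 2               # proposed C-NLUT: B-PFP + DC-PFP
print(nlut_patterns // cnlut_patterns)  # 128
```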

## 5. Conclusions

In this paper, a new type of NLUT, called the C-NLUT, has been proposed for fast one-step generation of holographic videos based on its unique thin-lens property. The proposed method, which is composed of only two PFPs, a baseline PFP and a depth-compensating PFP, can calculate the CGH for the object points of each 3-D video frame with a one-step process, which results in a great enhancement of the computational speed of the conventional NLUT. Experimental results reveal that the computational speed of the proposed C-NLUT has been improved by 30.2% on average compared to that of the conventional NLUT. Moreover, the average calculation time of the hybrid MC/C-NLUT has been reduced by 99.7%, 65.4%, 60.2% and 30.2% compared to the conventional ray-tracing, LUT, NLUT, and MC-NLUT methods, respectively. In addition, the memory size of the proposed method has been reduced by 82 × 10^{6}-fold and 128-fold compared to those of the conventional LUT and NLUT methods, respectively.

## Acknowledgment

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2013-067321).

## References and links

**1. **C. J. Kuo and M. H. Tsai, *Three-Dimensional Holographic Imaging* (John Wiley & Sons, 2002).

**2. **T.-C. Poon, *Digital Holography and Three-dimensional Display* (Springer Verlag, 2007).

**3. **K. Murano, T. Shimobaba, A. Sugiyama, N. Takada, T. Kakue, M. Oikawa, and T. Ito, “Fast computation of computer-generated hologram using Xeon Phi coprocessor,” Comput. Phys. Commun. **185**(10), 2742–2757 (2014). [CrossRef]

**4. **R. Oi, K. Yamamoto, and M. Okui, “Electronic generation of holograms by using depth maps of real scenes,” Proc. SPIE **6912**, 69120M (2008). [CrossRef]

**5. **M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging **2**(1), 28–34 (1993). [CrossRef]

**6. **Y. Pan, X. Xu, S. Solanki, X. Liang, R. Tanjung, C. Tan, and T. Chong, “Fast CGH computation using S-LUT on GPU,” Opt. Express **17**(21), 18543–18555 (2009), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-17-21-18543. [CrossRef] [PubMed]

**7. **H. Yoshikawa, T. Yamaguchi, and R. Kitayama, “Real-time generation of full color image hologram with compact distance look-up table,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (CD) (Optical Society of America, 2009), paper DWC4.

**8. **H. Yoshikawa, S. Iwase, and T. Oneda, “Fast computation of Fresnel holograms employing difference,” Proc. SPIE **3956**, 48–55 (2000). [CrossRef]

**9. **K. Matsushima and M. Takai, “Recurrence formulas for fast creation of synthetic three-dimensional holograms,” Appl. Opt. **39**(35), 6587–6594 (2000). [CrossRef] [PubMed]

**10. **T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. **34**(20), 3133–3135 (2009). [CrossRef] [PubMed]

**11. **N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express **21**(7), 9192–9197 (2013), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-7-9192. [CrossRef] [PubMed]

**12. **T. Tommasi and B. Bianco, “Frequency analysis of light diffraction between rotated planes,” Opt. Lett. **17**(8), 556–558 (1992). [CrossRef] [PubMed]

**13. **N. Delen and B. Hooker, “Free-space beam propagation between arbitrarily oriented planes based on full diffraction theory: a fast Fourier transform approach,” J. Opt. Soc. Am. A **15**(4), 857–867 (1998). [CrossRef]

**14. **K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A **20**(9), 1755–1762 (2003). [CrossRef] [PubMed]

**15. **H. Sakata and Y. Sakamoto, “Fast computation method for a Fresnel hologram using three-dimensional affine transformations in real space,” Appl. Opt. **48**(34), H212–H221 (2009). [CrossRef] [PubMed]

**16. **K. Yamamoto, Y. Ichihashi, T. Senoh, R. Oi, and T. Kurita, “Calculating the Fresnel diffraction of light from a shifted and tilted plane,” Opt. Express **20**(12), 12949–12958 (2012), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-20-12-12949. [CrossRef] [PubMed]

**17. **S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of 3-D objects using a novel look-up table method,” Appl. Opt. **47**, D55–D62 (2008). [CrossRef] [PubMed]

**18. **S.-C. Kim, J.-M. Kim, and E.-S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express **20**(11), 12021–12034 (2012), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-20-11-12021. [CrossRef] [PubMed]

**19. **D.-W. Kwon, S.-C. Kim, and E.-S. Kim, “Memory size reduction of the novel look-up-table method using symmetry of Fresnel zone plate,” Proc. SPIE **7957**, 79571B (2011). [CrossRef]

**20. **S.-C. Kim, J.-H. Yoon, and E.-S. Kim, “Fast generation of 3-D video holograms by combined use of data compression and look-up table techniques,” Appl. Opt. **47**, 5986–5995 (2008). [CrossRef] [PubMed]

**21. **S.-C. Kim, X.-B. Dong, M.-W. Kwon, and E.-S. Kim, “Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table,” Opt. Express **21**(9), 11568–11584 (2013), http://www.opticsinfobase.org/oe/abstract.cfm?&uri=oe-21-9-11568. [CrossRef] [PubMed]

**22. **X.-B. Dong, S.-C. Kim, and E.-S. Kim, “MPEG-based novel-look-up-table method for accelerated computation of digital video holograms of three-dimensional objects in motion,” Opt. Express **22**, 8047–8067 (2014), http://www.opticsinfobase.org/oe/abstract.cfm?&uri=oe-22-7-8047. [CrossRef] [PubMed]

**23. **X.-B. Dong, S.-C. Kim, and E.-S. Kim, “Three-directional motion compensation-based novel-look-up-table for video hologram generation of three-dimensional objects freely maneuvering in space,” Opt. Express **22**(14), 16925–16944 (2014), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-22-14-16925. [CrossRef] [PubMed]

**24. **S.-C. Kim, W.-Y. Choe, and E.-S. Kim, “Accelerated computation of hologram patterns by use of interline redundancy of 3-D object images,” Opt. Eng. **50**(9), 091305 (2011). [CrossRef]

**25. **S.-C. Kim, K.-D. Na, and E.-S. Kim, “Accelerated computation of computer-generated holograms of a 3-D object with N×N-point principle fringe patterns in the novel look-up table method,” Opt. Lasers Eng. **51**(3), 185–196 (2013). [CrossRef]

**26. **B. E. A. Saleh and M. C. Teich, *Fundamentals of Photonics*, 2nd ed. (Wiley, 2007).

**27. **O. Stankiewicz, K. Wegner, and M. Domański, “Nonlinear depth representation for 3D video coding,” in Proceedings of IEEE International Conference on Image Processing (ICIP, 2013), pp. 1752–1756. [CrossRef]

**28. **P. Aflaki, M. M. Hannuksela, D. Rusanovskyy, and M. Gabbouj, “Nonlinear depth map resampling for depth-enhanced 3-D video coding,” IEEE Signal Process. Lett. **20**(1), 87–90 (2013). [CrossRef]

**29. **I. Feldmann, O. Schreer, and P. Kauff, in *Proceedings of 4th Workshop on Digital Media Processing for Multimedia Interactive Services*, E. Izquierdo, ed. (World Scientific, 2003), pp. 433–438.

**30. **M. Domański, T. Grajek, D. Karwowski, K. Klimaszewski, J. Konieczny, M. Kurc, A. Łuczak, R. Ratajczak, J. Siast, O. Stankiewicz, J. Stankowski, and K. Wegner, “Technical Description of Poznan University of Technology proposal for Call on 3D Video Coding Technology,” ISO/IEC JTC1/SC29/WG11, Doc. M22697, Geneva, CH, Nov. 2011.

**31. **W. J. Plesniak, “Incremental update of computer-generated holograms,” Opt. Eng. **42**(6), 1560–1571 (2003). [CrossRef]

**32. **R. Patterson, “Spatio-temporal properties of stereoacuity,” Optom. Vis. Sci. **67**(2), 123–128 (1990). [CrossRef] [PubMed]

**33. **R. Patterson, “Human factors of 3-D displays,” J. Soc. Inf. Disp. **15**(11), 861–872 (2007). [CrossRef]