Abstract

A new curved hologram-based rotational-motion compensation (CH-RMC) method is proposed for accelerated generation of holographic videos of 3-D objects moving along a random path with many locally different arcs. All rotational motions of the object made on each arc can be compensated just by rotating their local curved holograms along the curving surfaces matched with the object's moving trajectory, without any additional calculation process, which greatly enhances the computational speed of conventional hologram-generation algorithms. Experiments with a test video scenario reveal that the average numbers of calculated object points (ANCOPs) and the average calculation times for one frame (ACTs) of the CH-RMC-based ray-tracing, wavefront-recording-plane and novel look-up-table methods are reduced by 73.10%, 73.84%, 73.34% and 68.75%, 50.82%, 66.59%, respectively, in comparison with those of their original methods. In addition, successful reconstructions of 3-D scenes from those holographic videos confirm the feasibility of the proposed system.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Thus far, electro-holographic displays based on computer-generated holograms (CGHs) have attracted much attention in the field of realistic three-dimensional (3-D) display and television broadcasting because CGHs can precisely record and reconstruct the light waves of 3-D objects in motion [1–4]. Electro-holography, however, suffers from a couple of challenging issues [5–8]. One is the unavailability of a large-scale, high-resolution spatial light modulator (SLM) to optically reconstruct the holographic data into a 3-D video, since the required hologram resolution is on the order of the light wavelength [6]. The other is the enormous computation time involved in generating the CGH patterns [7,8].

In the classical method called the ray-tracing (RT) algorithm [9], a 3-D object is modeled as a collection of self-luminous object points. The fringe patterns for those object points are directly calculated with optical interference and diffraction equations and added together to obtain the whole interference pattern of the 3-D object. This approach, however, requires direct one-by-one calculation of the fringe pattern for each object point at each hologram sample, so it takes a long time to calculate the hologram patterns, which may prohibit real-time generation of holographic videos of 3-D objects in motion [9–11].

To overcome this drawback, a number of CGH algorithms accelerating the computational speed have been proposed, including the look-up-table (LUT) method [12], wavefront-recording-plane (WRP) method [13,14], polygon method [15–17], split look-up-table (S-LUT) method [18], image hologram method [19], recurrence relation method [20], double-step Fresnel method [21], GPU-based methods [22,23], and so on. Recently, a novel look-up-table (NLUT) was also proposed as another approach for fast CGH generation of 3-D objects [24–26]. In this method, a 3-D object is approximated as a set of discretely sliced image planes with different depths, and only the fringe patterns of the center-located object points on each image plane, called principal fringe patterns (PFPs), are pre-calculated and stored. The fringe patterns for the other object points on each image plane are then generated just by shifting and adding the corresponding PFPs, without any additional calculation, based on a unique shift-invariance property of the PFP [24].

Here it must be noted that, unlike other CGH algorithms, the NLUT customarily generates the CGH patterns of a 3-D object in motion with a two-step process consisting of a pre-process and a main-process [24–26]. In the pre-process, the number of object points to be calculated is minimized by removing as much redundant object data between consecutive 3-D video frames as possible, employing motion estimation and compensation schemes widely used in digital communication systems [4,19]. This data compression of the NLUT method is enabled by the shift-invariance property of the PFP. In the following main-process, the CGH patterns for the compressed object data are calculated with simple repetitive shifting and adding operations of the corresponding PFPs [24,25]. Thus, the computational speed of the NLUT method can be greatly enhanced not only by reducing the number of calculated object points in the pre-process, but also by shortening the CGH calculation time itself in the main-process.

There are several types of NLUTs employing this pre-process for eliminating the temporal redundancy between two consecutive video frames, including the temporal redundancy-based NLUT (TR-NLUT) [27], motion compensation-based NLUT (MC-NLUT) [28], MPEG-based NLUT (MPEG-NLUT) [29] and three-directional motion compensation-based NLUT (3D-MC-NLUT) [30] methods. The computational speed of these NLUT methods has certainly been greatly enhanced, but they still have operational limitations in practical applications. As mentioned above, all those NLUT methods employing the motion estimation and compensation process operate based on the shift-invariance property of the PFP [24–26]. This property, however, allows only x, y and z-directional motion vectors to be estimated and compensated. Thus, these methods are effective mainly in situations where 3-D objects move linearly with small depth variations [28–31].

In the real world, however, most 3-D objects move freely. Such free motion cannot be effectively compensated with those NLUT methods since the object moves along an arbitrary curving trajectory with many locally different arcs. In other words, for a freely moving object, the image differences between two consecutive video frames can exceed 50% in those methods, which means that introducing the motion estimation and compensation process into the NLUT method is no longer useful [28–31]. Of course, the NLUT method can deal with a freely moving 3-D object by greatly increasing the frame rate. But then the object data to be estimated and compensated, as well as the resultant number of object points to be calculated, increase sharply, even though the image differences between two consecutive video frames can be kept below 50%.

Thus, in this paper, we propose a curved hologram-based rotational-motion compensation (CH-RMC) method based on the concept of rotation-invariance of the curved hologram, for highly accelerated generation of holographic videos of a 3-D object randomly moving along a curved path with many locally different arcs. In the proposed method, all rotational motions of the 3-D object made on every arc can be directly compensated just by rotating their local curved holograms on the curving surfaces matched with the trajectories on which the object moves. Thus, with this CH-RMC process, most hologram patterns of a 3-D object in rotational motion can be directly generated without any additional calculation, which results in a dramatic reduction of the overall calculation time of the holographic video. In principle, the proposed method can be applied to any conventional CGH algorithm, including the RT, WRP and NLUT methods, to enhance its computational speed, which is one of the most important features of the proposed method.

To confirm the feasibility of the proposed method, experiments with a test 3-D object moving randomly on the ground are carried out and the results are discussed. Operational performances of the conventional ray-tracing (RT), wavefront-recording-plane (WRP) and novel look-up-table (NLUT) methods, with and without the proposed CH-RMC method, are comparatively discussed in terms of the average number of calculated object points (ANCOP) and the average calculation time for one frame (ACT). In addition, the operational performance of the NLUT method employing the proposed CH-RMC process is also compared with those of the conventional linear-motion compensation-based NLUTs, i.e., the MC-NLUT, MPEG-NLUT and 3DMC-NLUT methods.

2. Proposed method

2.1 A basic concept of rotation-invariance of the curved hologram

Several NLUT methods employing motion estimation and compensation processes, such as the TR-NLUT, MC-NLUT, MPEG-NLUT and 3D-MC-NLUT methods [27–30], operate just by taking advantage of the shift-invariance property of the PFPs of the NLUT method [24,25]. These methods, however, work only for a 3-D object moving in linear motion since the hologram patterns are calculated as planar forms and the shifting and adding operations of the PFPs are performed on planar surfaces.

On the other hand, in the real world, most 3-D objects move not on straight paths, but on curved paths with many locally different arcs, which means that those objects' motions cannot be properly compensated in the conventional NLUT methods operating with linear motion compensation on planar surfaces. In other words, in those NLUT methods, the image differences between two consecutive video frames can exceed 50% for objects in random motion, so motion estimation and compensation processes are no longer effective there.

Thus, in this paper, as a new approach for comprehensive compensation of the rotational motions of a 3-D object and acceleration of the resultant computational speed of CGH algorithms, the concept of rotation-invariance of the curved hologram is proposed. In principle, the rotational motion of a 3-D object can be compensated just by rotating the curved hologram right on the arced surface along which the 3-D object moves. Therefore, in the proposed method, a polar-coordinate system is used to describe the locations of the object, curved hologram, and reconstructed image. In this system, two parameters, angle (θ) and radius (r), define each location. As seen in Figs. 1(a) and 1(b), representing the top and side views of the rotation compensation process, respectively, when a 3-D object located at the coordinate (r1, θ1) rotates by an angle θ to the position (r1, θ2), the curved hologram for that object can be synchronized with this motion by being rotated by the same angle.


Fig. 1 Conceptual diagram of the rotational-motion compensation process of a moving object with the curved hologram on the rotating surface.


Figure 1 clearly shows the mutual relationship between the object and the curved hologram. Initially, the object and curved hologram are assumed to be at O(r1, θ1) and H(r2, θ1), respectively. When the object is rotated by an angle θ in the clockwise direction, the curved hologram of the rotated object can be simply obtained by rotating the previous curved hologram H(r2, θ1) by the angle θ along the same curved surface on which the object moves, which represents the rotational motion-invariance property of the curved hologram on the arced surface of the object. The relationship between the curved holograms at the initial and rotated locations can be expressed as Eq. (1).

$$H(r_2,\theta_2)=H(r_2,\theta_1+\theta) \tag{1}$$

Here, the circle of the curved hologram and the circle of the rotational motion of the object share the same center but have different radii; thus, rotational-motion compensation of the object can be achieved just by rotating the previous curved hologram in accordance with the object motion. It must be noted that the curved hologram of the previous frame and its rotated version are almost completely overlapped on the curved surface after the rotational-motion compensation. Thus, the overlapped part of the previous frame can be reused for the curved hologram of the current frame without any additional calculation, which greatly reduces the computational time of the hologram pattern. Therefore, based on this rotation-invariance property of the curved hologram on the moving surface of the object, a new curved hologram-based rotational-motion compensation (CH-RMC) method is proposed for highly accelerating the computational speed of conventional CGH algorithms.
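The rotation-invariance relation of Eq. (1) can be sketched numerically: a curved hologram sampled uniformly in angle is "rotated" simply by shifting its angular samples, so the shifted samples are reused and only the vacated region must be recomputed. The following minimal NumPy sketch uses illustrative names and a 1-D angular sampling; it is not the paper's actual implementation.

```python
import numpy as np

# Sketch of Eq. (1): a curved hologram sampled uniformly in angle theta
# can be "rotated" by an integer number of angular samples, i.e. by
# shifting its columns. Names and sampling are illustrative assumptions.
def rotate_curved_hologram(H, d_theta, theta):
    """Shift a theta-sampled curved hologram H by angle theta (theta >= 0).

    Only the samples that remain on the recorded angular span are reused;
    the vacated region (zero-filled here) is the 'blank' part that must be
    newly computed, as in the paper's sixth step.
    """
    n = int(round(theta / d_theta))   # rotation expressed in sample units
    if n == 0:
        return H.copy()
    out = np.zeros_like(H)
    out[n:] = H[:-n]                  # reused (overlapped) region
    return out                        # out[:n] stays blank

# toy example: 1-D curved hologram with 360 angular samples of 0.5 degrees
d_theta = np.deg2rad(0.5)
H1 = np.exp(1j * np.linspace(0, 8 * np.pi, 360))
H2 = rotate_curved_hologram(H1, d_theta, np.deg2rad(5))  # 10-sample shift
```

Here a 5° rotation at a 0.5° angular pitch reuses 350 of the 360 samples; only the 10 vacated samples would need fresh calculation.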

2.2 Overall functional block-diagram of the proposed method

Figure 2 shows the overall functional diagram of the proposed CH-RMC method, which is composed of seven processes. Here, a 3-D object is assumed to move along a curved path with many locally different arcs on the ground, where the center coordinates and radii of the local arcs can be determined from the simple geometric fact that three points on the same plane determine a circle.


Fig. 2 Overall functional block-diagram of the proposed CH-RMC method.


In the first step, a hologram pattern for the 1st-frame image of the 3-D object, called the original planar hologram of the 1st frame (OPH1), is generated at a fixed distance from the object with one of the conventional CGH algorithms. In the second step, the rotational angle of the object between the 1st and 2nd frames is extracted using the estimated center location and radius of the local arced circle along which the object moves between those two frames. In the third step, the OPH1 is propagated to the position corresponding to the local arced circle, yielding the local planar hologram of the 1st frame (LPH1), which is then transformed into its curved version, the local curved hologram of the 1st frame (LCH1), with the planar hologram-to-curved hologram (PH-to-CH) conversion method.

In the fourth step, the LCH1 is put on the curving surface matched with the moving path of the object and rotated by the extracted rotational angle. In the fifth step, the overlapped region of the LCH1 with the rotating surface of the object is transformed into the local planar hologram of the 2nd frame (LPH2) with the curved hologram-to-planar hologram (CH-to-PH) conversion method and then back-propagated to the location of the original planar hologram, where it serves as most of the 2nd-frame OPH (OPH2). In the sixth step, the remaining small blank part of the OPH2, corresponding to the non-overlapped region, is calculated with one of the CGH algorithms, and in the final step, possible errors between the compensated and actual 2nd-frame holograms are corrected.

Figure 3 illustrates a video scenario of a 3-D car object in random motion to show how the proposed CH-RMC method works according to the seven-step process of Fig. 2. As seen in the top view of the scenario in Fig. 3, the car is assumed to move along a curving path with three locally different arcs from the location P1(x1, z1) to the location P4(x4, z4) on the ground of the x-z plane, and the OPHs to be generated are located at the z = 0 plane, colored dark grey. In this scenario, three sets of circle centers and radii can be estimated, denoted (O1(x01, z01), r1), (O2(x02, z02), r2) and (O3(x03, z03), r3), and the three local circles are colored blue, red and green, respectively. With this scenario, the seven-step process of the proposed CH-RMC method can be illustrated in detail as follows.


Fig. 3 Operational process of the proposed CH-RMC method for a 3-D car object freely- moving on the ground of the x-z plane.


2.2.1 Generation of the 1st-frame original planar hologram

Initially, the 1st-frame OPH of the car object moving on the blue arc, designated B-OPH1, is generated with one of the conventional CGH algorithms. Here, three conventional CGH algorithms, RT, WRP and NLUT, are employed.

In the RT method, a 3-D object is modeled as a collection of self-luminous points of light. Fringe patterns for all object points are calculated with optical diffraction and interference equations and added together to obtain the whole interference pattern of the 3-D object. In the WRP method, the CGH pattern of the object is generated in two steps: a diffraction pattern on a wavefront plane located very near the object, called the wavefront recording plane (WRP), is calculated first and then propagated to the final CGH plane with Fresnel diffraction integrals. In the NLUT method, a 3-D object is approximated as a set of discretely sliced image planes with different depths. Only the fringe patterns of the center-located object points on each image plane, called principal fringe patterns (PFPs), are pre-calculated and stored in the NLUT. The CGH pattern of the object is then generated just by simple shifting and adding operations of those PFPs based on the shift-invariance property of the PFP.
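The RT summation described above can be condensed into a short sketch: every object point contributes a spherical wavefield to each hologram sample, and the contributions are summed. This is a hedged, illustrative implementation; the function name and single-point scene are assumptions, not the paper's code.

```python
import numpy as np

# Minimal ray-tracing (RT) style CGH sketch: each self-luminous object
# point contributes a spherical-wave fringe on the hologram plane, and
# all contributions are summed (illustrative only, no paraxial tricks).
def rt_hologram(points, amps, xs, ys, wavelength=532e-9):
    """points: (N, 3) array of (x, y, z) object points; amps: (N,) amplitudes.
    xs, ys: 1-D hologram-plane sample coordinates. Returns the complex field."""
    k = 2 * np.pi / wavelength
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros(X.shape, dtype=complex)
    for (px, py, pz), a in zip(points, amps):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r   # spherical wave per point
    return field

# one object point 10 cm behind a 64 x 64 hologram with 8.1 um pitch
xs = (np.arange(64) - 32) * 8.1e-6
H = rt_hologram(np.array([[0.0, 0.0, 0.10]]), np.array([1.0]), xs, xs)
```

The per-point, per-sample loop structure is exactly what makes the RT method slow, which motivates the table-based and compensation-based methods discussed in this paper.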

With these three methods, not only the 1st-frame OPH of the object mentioned above, but also the small parts of the 2nd-frame OPH corresponding to the blank area, and the hologram patterns for the object points that differ between the compensated and actual images, are calculated. These methods employ the proposed CH-RMC method to compensate the rotational motion of the object and, at the same time, to generate most of the 2nd-frame OPH from that of the 1st frame.

2.2.2 Extraction of curving-motion parameters of the 3-D object

For the scenario of a car object moving along the curved path with three locally-different arcs of Fig. 3, three sets of five curving-motion parameters of the object for each of the blue, red and green circles are to be extracted for operating the CH-RMC process, which include center locations and radii of those local circles such as (O1(x01, z01), r1), (O2(x02, z02), r2) and (O3(x03, z03), r3), vertical and horizontal distances between the OPHs and LPHs such as (d1, l1), (d2, l2) and (d3, l3), and rotational angles between the two consecutive frames such as (θ1-1, θ1-2, …, θ1-n), (θ2-1, θ2-2, …, θ2-n) and (θ3-1, θ3-2, …, θ3-n), respectively.

For instance, Fig. 4 illustrates an extraction process of curving-motion parameters on the blue arc whose center location and radius are given by O1(x01, z01) and r1, respectively. Here, video images on the blue arc are assumed to be composed of n-frames such as P1-1(x1-1, z1-1), P1-2(x1-2, z1-2), ..., P1-n (x1-n, z1-n). Now, movements of the car object between the two consecutive video frames can be described by its rotational motions whose motion vectors are given by their respective rotational angles of θ1-1, θ1-2, …, and θ1-n.


Fig. 4 Schematic diagram for extraction of the curving-motion parameters of the car object on the blue arc.


In fact, based on the simple mathematical principle that three points on the same plane determine a circle, the circle function of the blue local arc and its center location and radius can be determined from the three object points P1-1, P1-2 and P1-3, as seen in Fig. 4. Then, in each following frame, the distance between the object and the circle center is compared with the circle radius to determine whether the object still moves on the same blue arc. If the distance from the object to the center remains equal to the arc radius, the object is assumed to still be on the same arc, and its rotational-motion compensation can be performed with the same local curved and planar holograms; otherwise, those processes must be carried out on the new rotational-motion geometry of the red arc of Fig. 3.
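The circle-determination and arc-membership tests above can be sketched as follows; `circle_from_points` and `on_same_arc` are hypothetical helper names, and the circumcenter expression is the standard closed-form solution for three 2-D points.

```python
import numpy as np

# 'Three points on the same plane determine a circle': solve for the
# circumcenter of P1, P2, P3 on the ground plane, then check whether a
# new object position still lies on that arc. Illustrative helpers only.
def circle_from_points(p1, p2, p3):
    """Return (center, radius) of the circle through three 2-D points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(center - np.asarray(p1, dtype=float))

def on_same_arc(p, center, radius, tol=1e-6):
    """True if point p is still on the arc (distance to center == radius)."""
    return abs(np.linalg.norm(np.asarray(p, dtype=float) - center) - radius) < tol

c, r = circle_from_points((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0))  # unit circle
```

In the paper's flow, a failed `on_same_arc` test is the trigger for switching from the blue arc's geometry to the red arc's.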

When the center location of the blue arc is given, the position of the local planar hologram of the 1st frame, called B-LPH1, can be determined since the radius (R) of the curved hologram is set to be constant. In general, the position of the B-LPH1 may not match that of the 1st-frame original planar hologram of the object (B-OPH1); thus, the vertical and horizontal distances d1 and l1 between the B-LPH1 and B-OPH1 need to be calculated to propagate the B-OPH1 to the local plane of the B-LPH1. In addition, the set of rotational angles θ1-1, θ1-2, …, θ1-n, acting as motion vectors of the object between two successive frames, can be calculated with the combined use of the center location of the local circle and the positions of the object in the two successive frames.

For instance, the rotational angle θ1-1 can be calculated from the triangle formed by the three points O1, P1-1 and P1-2. As seen in Fig. 4, the height of this triangle from O1, denoted by h, is given by h = 2A/|P1-1P1-2|, where A is the area of this triangle [32]. The angle θ1-1 is separated by this height into two angle components θA and θB, where θ1-1, θA and θB are given by Eqs. (2), (3) and (4), respectively.

$$\theta_{1\text{-}1}=\theta_A+\theta_B \tag{2}$$
$$\theta_A=\arccos\!\left(\frac{h}{\left|O_1 P_{1\text{-}1}\right|}\right) \tag{3}$$
$$\theta_B=\arccos\!\left(\frac{h}{\left|O_1 P_{1\text{-}2}\right|}\right) \tag{4}$$
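Eqs. (2)-(4) can be checked with a few lines of NumPy; the helper below recovers the height h from the triangle area and splits the central angle into θA and θB. The helper name is illustrative, and the h = 2A/base relation for a triangle is an assumption stated in the text above.

```python
import numpy as np

# Rotation-angle extraction of Eqs. (2)-(4): the central angle at the
# circle center O subtended by two consecutive object positions is split
# by the triangle height h into theta_A and theta_B. Illustrative helper.
def rotation_angle(o, p_prev, p_next):
    o, p_prev, p_next = (np.asarray(v, dtype=float) for v in (o, p_prev, p_next))
    base = np.linalg.norm(p_next - p_prev)
    # triangle area from the 2-D cross product, then the height onto the base
    area = 0.5 * abs((p_prev[0] - o[0]) * (p_next[1] - o[1])
                     - (p_prev[1] - o[1]) * (p_next[0] - o[0]))
    h = 2.0 * area / base
    theta_a = np.arccos(h / np.linalg.norm(p_prev - o))   # Eq. (3)
    theta_b = np.arccos(h / np.linalg.norm(p_next - o))   # Eq. (4)
    return theta_a + theta_b                              # Eq. (2)

# sanity check: two positions 30 degrees apart on a unit circle
th = rotation_angle((0.0, 0.0), (1.0, 0.0),
                    (np.cos(np.pi / 6), np.sin(np.pi / 6)))
```

On a circular arc both distances to the center equal the arc radius, so θA = θB and the two arccos terms each recover half the rotation angle.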

2.2.3 Transformation of the LPH into its corresponding LCH

Even though there have been many research works on curved or cylindrical holograms [33–35], one of the critical issues in those fields is how to compute such non-planar hologram patterns quickly. Thus, in this paper, a new conversion method between curved and planar holograms, which includes both the planar hologram-to-curved hologram (PH-to-CH) and curved hologram-to-planar hologram (CH-to-PH) conversions, is proposed for fast calculation of curved and planar hologram patterns.

Figure 5 shows a top view of the conceptual diagram of the proposed PH-to-CH conversion method on the blue arc, where red and blue dots represent the pixels of the local planar and curved holograms, respectively. Here, the lines connecting the red and blue dots are set to be perpendicular to the planar hologram to ensure that the sampling interval in the curved hologram equals that in the planar hologram along the horizontal direction.


Fig. 5 Conceptual diagram of the proposed PH-to-CH and CH-to-PH conversion processes.


In fact, both the PH-to-CH and CH-to-PH conversions can be implemented with the same process. As seen in Fig. 5, the local planar hologram B-LPH1 is diffracted to the local curved hologram B-LCH1 according to Eq. (5), where the blue and red arrows indicate the PH-to-CH and CH-to-PH conversions, respectively.

$$\text{B-LCH}_1=\sum_{y=1}^{L}\sum_{x=1}^{M}\text{B-LPH}_1(x,y)\,\exp\!\bigl(ikz(x,y)\bigr) \tag{5}$$

In Eq. (5), k (= 2π/λ) denotes the wave number, where λ is the wavelength of the light, and L and M denote the numbers of pixels of the B-LPH1 along the horizontal and vertical directions, respectively. In addition, z(x, y) represents the distance between the pixels of the B-LPH1 and B-LCH1 at the position (x, y). It must be noted that when the B-LCH1 is located very close to the B-LPH1, the light coming from each pixel of the B-LPH1 traverses a very small region of the B-LCH1, which means that each light cone may enclose only one pixel of the B-LCH1, leading to a massive reduction of the computational load.

According to the Nyquist sampling theorem, the maximum spatial frequency fmax is given by fmax = (2p)−1, where p represents the sampling pitch. According to the grating equation, the relationship between the maximum spatial frequency fmax and the maximum diffraction angle αm is given by sin(αm) = λfmax. Therefore, as seen in Eq. (6), the maximum width (W) of the diffracted region of one pixel of the B-LPH1 can be described with the combined use of the propagation distance z and the diffraction angle αm.

$$W=z\tan(\alpha_m)=z\tan\!\left[\arcsin\!\left(\frac{\lambda}{2p}\right)\right] \tag{6}$$
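Eq. (6) is easy to evaluate for the paper's sampling parameters (532 nm wavelength, 8.1 µm pitch); the gap z below is an assumed value chosen only for illustration.

```python
import numpy as np

# Eq. (6): maximum width W of the region on the curved hologram that one
# planar-hologram pixel can illuminate, from the Nyquist limit
# f_max = 1/(2p) and the grating equation sin(alpha_m) = lambda * f_max.
def diffraction_width(z, wavelength=532e-9, pitch=8.1e-6):
    alpha_m = np.arcsin(wavelength / (2.0 * pitch))  # max diffraction angle
    return z * np.tan(alpha_m)

W = diffraction_width(z=100e-6)  # assumed 100 um planar-to-curved gap
```

For z = 100 µm the diffracted width comes out to roughly 3.3 µm, i.e. less than one 8.1 µm pixel, which is consistent with the one-pixel-to-one-pixel mapping argued above.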

The detailed transformation from the planar hologram to the curved hologram can be described by Eqs. (7)-(10), where the conversion table Tconv(x, y), defined by Eq. (7), contains the pre-calculated complex differences between the curved and planar holograms.

$$T_{\mathrm{conv}}(x,y)=\exp\!\bigl(ikz(x,y)\bigr) \tag{7}$$

In Eq. (7), z(x, y), representing the propagation distance from one pixel of the B-LPH1 to that of the B-LCH1, is given by Eq. (8).

$$z(x,y)=z_0-R+\sqrt{R^2-\left(\frac{w_h}{2}-x\right)^{2}} \tag{8}$$

where R and wh denote the radius of the curved hologram and the width of the hologram, respectively, and z0 represents the maximum perpendicular distance between the planar and curved holograms, which is given by Eq. (9).

$$z_0=R-\sqrt{R^2-\left(\frac{w_h}{2}\right)^{2}} \tag{9}$$

Thus, the PH-to-CH conversion can be regarded as a process of multiplying the complex amplitude of the local planar hologram B-LPH1 by the conversion table of Eq. (7), as given by Eq. (10).

$$A_{\text{B-LCH}_1}(x,y)=A_{\text{B-LPH}_1}(x,y)\,T_{\mathrm{conv}}(x,y) \tag{10}$$

where AB-LCH1(x, y) and AB-LPH1(x, y) represent the complex amplitudes of the curved and planar holograms, respectively.
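Eqs. (7)-(10) translate directly into a small sketch: build the unit-magnitude phase table Tconv from the geometry of Eqs. (8) and (9) and multiply it onto the planar hologram. Function names and parameter values below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Eqs. (7)-(10): PH-to-CH conversion for a cylindrically curved hologram.
# z(x, y) depends only on x here; wh is the hologram width, R the
# (assumed) curvature radius of the curved hologram.
def conversion_table(n_x, n_y, pitch, R, wavelength=532e-9):
    k = 2 * np.pi / wavelength
    x = (np.arange(n_x) + 0.5) * pitch              # pixel centers across wh
    wh = n_x * pitch
    z0 = R - np.sqrt(R**2 - (wh / 2) ** 2)          # Eq. (9)
    z = z0 - R + np.sqrt(R**2 - (wh / 2 - x) ** 2)  # Eq. (8)
    return np.exp(1j * k * z) * np.ones((n_y, 1))   # Eq. (7), broadcast to 2-D

def ph_to_ch(planar, pitch, R):
    """Eq. (10): multiply the planar hologram by the conversion table."""
    n_y, n_x = planar.shape
    return planar * conversion_table(n_x, n_y, pitch, R)

T = conversion_table(64, 4, pitch=8.1e-6, R=0.1)    # assumed 0.1 m radius
```

Because the table is pure phase (unit magnitude), the conversion is a pointwise multiply rather than a full diffraction integral, which is where the speed-up of this conversion method comes from.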

2.2.4 Rotational-motion compensation of the object with the LCH

As seen in Fig. 6, on the blue circle, the object moves from position P1-1 to position P1-2 between the 1st and 2nd frames with the rotational angle θ1-1. Now, the B-LCH1 is put on the curving surface matched with the moving path of the object and rotated by this extracted rotational angle θ1-1. Then, most of the B-LCH1 overlaps with the 2nd-frame local curved hologram B-LCH2. In other words, this rotational motion-compensated version of the B-LCH1 can be used as most of the B-LCH2 without any additional calculation.


Fig. 6 Conceptual diagram of the rotational-motion compensation process.


Of course, as seen in Fig. 6, even though the rotational motion-compensated version of the B-LCH1 can act as most of the B-LCH2 of the object, it still leaves a small blank area corresponding to the non-overlapped region of the B-LCH1 with the B-LCH2. Thus, the hologram pattern for the blank part of the B-LCH2 needs to be calculated, which is discussed in the following section. In general, the radius (R) of the B-LCH is set to be constant, whereas the radius of the local arc on which the object moves may change depending on the input scenario. Thus, the size of the non-overlapped region is mostly determined by the rotational motion of the object. As the radius of the B-LCH decreases, the corresponding non-overlapped region becomes smaller, but the curvature of the B-LCH must still be kept small enough to guarantee that its pixels remain located near the pixels of the B-LPH. Moreover, the size of the blank area can also be reduced by increasing the number of sampled frames, which decreases the rotational angle between two consecutive frames, but this greatly increases the total computational time for the whole video scenario. Thus, there is a tradeoff between the number of sampled frames and the size of the non-overlapped region.
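This tradeoff can be made concrete with a toy model: if the curved hologram spans an angle `span` about the arc center, a per-frame rotation θ leaves a blank fraction of roughly θ/span to recompute, so halving θ by doubling the frame count leaves the total recomputed area per scenario roughly unchanged. All numbers below are assumed, purely for illustration.

```python
import numpy as np

# Toy model of the blank-area tradeoff: the non-overlapped fraction of
# the curved hologram per frame is roughly theta / span, clipped at 1.
# 'span' (the hologram's angular extent) is an assumed value.
def blank_fraction(theta, span):
    return min(theta / span, 1.0)

span = np.deg2rad(30)                        # assumed angular extent
per_frame = [blank_fraction(np.deg2rad(a), span) for a in (1, 3, 6)]
```

The monotone growth of `per_frame` with the rotation angle mirrors the statement above: finer temporal sampling shrinks each frame's blank region but multiplies the number of frames to process.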

2.2.5 Transformation of the LCH back into its corresponding LPH

The rotational motion-compensated B-LCH1, which represents most of the B-LCH2, is then transformed back into its corresponding 2nd-frame local planar hologram B-LPH2 with the CH-to-PH conversion process. Here, the CH-to-PH conversion is just the inverse of the PH-to-CH conversion, with the conversion table given by Eq. (11).

$$T'_{\mathrm{conv}}(x,y)=\exp\!\bigl(ik(-z(x,y))\bigr) \tag{11}$$

In Eq. (11), −z(x, y) denotes the inverse propagation distance from one pixel of the curved hologram to that of the planar hologram at the position (x, y). Thus, Eq. (11) can be rewritten in the form of Eq. (12), the inverse of Eq. (7).

$$T'_{\mathrm{conv}}(x,y)=\frac{1}{\exp\!\bigl(ikz(x,y)\bigr)}=\frac{1}{T_{\mathrm{conv}}(x,y)} \tag{12}$$

Thus, the B-LPH2 can be obtained just by dividing the B-LCH2 by the conversion table of Eq. (7), as given by Eq. (13).

$$A_{\text{B-LPH}_2}(x,y)=A_{\text{B-LCH}_2}(x,y)\,T'_{\mathrm{conv}}(x,y)=\frac{A_{\text{B-LCH}_2}(x,y)}{T_{\mathrm{conv}}(x,y)} \tag{13}$$

where AB-LCH2(x, y) and AB-LPH2(x, y) represent the complex amplitudes of the 2nd-frame local curved and planar holograms, respectively.
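Because Tconv has unit magnitude, the CH-to-PH step of Eqs. (11)-(13) inverts the PH-to-CH step exactly; the round trip below demonstrates this with an arbitrary random phase table rather than the paper's actual geometry.

```python
import numpy as np

# Eqs. (11)-(13): CH-to-PH is the exact inverse of PH-to-CH, so dividing
# by the same unit-magnitude table T_conv (equivalently, multiplying by
# its conjugate) recovers the planar field. Toy data for illustration.
rng = np.random.default_rng(0)
planar = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
t_conv = np.exp(1j * rng.uniform(0, 2 * np.pi, (8, 8)))  # arbitrary phase table

curved = planar * t_conv                  # PH-to-CH, Eq. (10)
recovered = curved / t_conv               # CH-to-PH, Eq. (13)
```

The exactness of this inverse is what lets the compensated curved hologram be reused on the planar side with no accumulated conversion error.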

2.2.6 Calculation of the original planar hologram for the non-overlapped region

After most of the B-LPH2 is generated from the rotational motion-compensated version of the B-LPH1, it is propagated back to the plane of the original planar hologram over the vertical and horizontal distances d1 and l1 between the B-OPH2 and B-LPH2, where it acts as the part of the B-OPH2 compensated from its previous version, the B-OPH1. Since most of the B-LCH2 overlaps with the B-LCH1, a large area of the B-OPH2 is generated just by compensation with the B-OPH1, leaving only a small blank area to be calculated. This blank area of the B-OPH2 needs to be generated with one of the CGH algorithms.

In fact, the B-OPH2 is composed of regions that are overlapped and non-overlapped with the rotational motion-compensated version of the B-OPH1, as seen in Fig. 6, where the hologram pattern for the non-overlapped region needs to be calculated using the CGH algorithms.

2.2.7 Error correction between the compensated and actual hologram patterns

Even though the rotational motions of the object between two consecutive frames can be compensated with the proposed CH-RMC method, the compensated object image of the 1st frame may not exactly match the actual object image of the 2nd frame. Thus, the similarity between the compensated and actual object images of the two consecutive frames needs to be estimated with a cost-function parameter, the SNR, which is defined by Eq. (14).

$$\mathrm{SNR}=10\log_{10}\!\left\{\frac{\displaystyle\sum_{x=1}^{P_x}\sum_{y=1}^{P_y}N(x,y)^{2}}{\displaystyle\sum_{x=1}^{P_x}\sum_{y=1}^{P_y}\bigl[N(x,y)-M(x,y)\bigr]^{2}}\right\} \tag{14}$$

where Px and Py denote the width and length of the object image, and N(x, y) and M(x, y) represent the pixels to be compared in the compensated and actual object images of the 2nd frame, respectively. N(x, y) can be obtained just by rotating the original object image, composed of intensity and depth data, by the estimated angle about the center of the motion circle, which can be easily carried out with the 3-D object data extracted from the intensity and depth images. Here, a proper threshold value of the SNR is set to determine whether the compensated object image is similar enough to the actual object image. If the estimated SNR exceeds this threshold, the compensated hologram can be regarded as the actual hologram of the 2nd frame. Otherwise, the compensated hologram needs to be corrected, which can be done by calculating the hologram pattern for the differing object points and adding it to the compensated hologram pattern.
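Eq. (14) can be sketched directly; `snr_db` is a hypothetical helper name and the 4 × 4 images are toy data.

```python
import numpy as np

# Eq. (14): SNR between the compensated image N and the actual
# next-frame image M, used to decide whether error correction is needed.
def snr_db(n, m):
    n = np.asarray(n, dtype=float)
    m = np.asarray(m, dtype=float)
    err = np.sum((n - m) ** 2)
    if err == 0:
        return np.inf                        # perfect compensation
    return 10.0 * np.log10(np.sum(n ** 2) / err)

a = np.ones((4, 4))
b = a.copy(); b[0, 0] = 0.0                  # one mismatched pixel
val = snr_db(a, b)
```

A higher SNR means a smaller residual between the compensated and actual frames; the threshold comparison described above then decides whether the correction step of the seventh process is invoked.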

3. Experiments

3.1 Overall configuration of the experimental setup

Figure 7 shows the overall experimental setup of the proposed system, which is composed of digital and optical processes. In the digital process of Fig. 7(a), the hologram pattern for the 1st-frame input 3-D scene is initially generated with the conventional CGH algorithms, and the hologram patterns of the following frames are generated with the proposed CH-RMC method. For the comparative performance analysis, three conventional CGH algorithms, the ray-tracing (RT), wavefront-recording-plane (WRP) and novel look-up-table (NLUT) methods, as well as three conventional linear-motion compensation-based NLUT versions, the MC-NLUT, MPEG-NLUT and 3DMC-NLUT methods [28–31], are used to generate the holographic video for the input video scenario, in which a 'Car' object moves freely around a fixed 'House' object on the ground.


Fig. 7 Overall configuration of the experimental setup for digital and optical processes of the proposed CH-RMC-based system.


In the optical process of Fig. 7(b), all the holograms are reconstructed with an optical 4-f lens system, where two dichromatic convex lenses (Model: 63-564, Edmund Optics) with focal lengths of 15 cm are used. A green laser (G-laser) (Stradus 532, Vortran Laser Technology) is used as the light source; it is collimated and expanded by the laser collimator (LC) and beam expander (BE) (Model: HB-4XAR.14, Newport), and then illuminates the SLM onto which the calculated hologram pattern is loaded. In the experiments, a reflection-type amplitude-modulation SLM (Model: HOLOEYE LC-R-1080) with a resolution of 1920 × 1200 pixels and a pixel pitch of 8.1 µm is employed. As a test video scenario, an input 3-D scene with 80 frames of video images, in which a 'Car' object moves around the fixed 'House' along a curved path with three locally different arcs, is generated with 3DS MAX, as shown in Fig. 8. Each 3-D video image has 256 depth planes, each composed of 320 × 240 pixels, and the sampling rates in the x-y plane and along the z-direction are set to 1 mm and 0.1 mm, respectively.


Fig. 8 (a) Configuration of the test 3-D video scenario with 80 video frames; (b) Top-view of the test scenario.


As seen in Fig. 8(b), the 3-D ‘Car’ moves along three locally-different arcs, C1 (1st-20th frames), C2 (21st-50th frames) and C3 (51st-80th frames), with their own circle centers of O1, O2, O3 and radii of r1, r2, r3, respectively. These local arcs of C1, C2 and C3 are colored blue, red and green, respectively. Three ‘Car’ images corresponding to three consecutive frames on each arc, located at (P1-1, P1-2, P1-3), (P2-1, P2-2, P2-3) and (P3-1, P3-2, P3-3), respectively, are used to determine the respective circle centers and radii. The centers of the original planar holograms are located at the origin of the coordinates (x0, z0), while the local planar and curved holograms lie on the circles centered at O1, O2 and O3, respectively.

3.2 Generation of the 1st-frame hologram pattern

At the first step, for the test video scenario of Fig. 8, six original planar holograms of the 1st-frame with a resolution of 1920 × 1200 pixels and a pixel pitch of 8.1 µm are generated with each of the conventional RT, WRP and NLUT methods, as well as the MC-NLUT, MPEG-NLUT and 3DMC-NLUT methods, as shown in Fig. 9. As mentioned above, in the 1st-frame of the input scene, the center of the moving ‘Car’ object is located at (46.42 cm, 28.69 cm) in the x-z plane, i.e., the ‘Car’ has a depth distance of 28.69 cm from the OPH plane located at the origin, while the z-directional depth of the fixed ‘House’ ranges from 26.48 cm to 27.21 cm.


Fig. 9 Calculated CGH patterns for the 1st-frame B-OPH of the test video scenario with each of the (a) RT, (b) WRP, (c) NLUT, (d) MC-NLUT, (e) MPEG-NLUT, and (f) 3DMC-NLUT methods.


In the RT method, the B-OPH1 is generated by calculating the CGH patterns for all object points based on Fresnel diffraction integrals, whereas in the WRP method, a WRP placed on the 1st depth layer of the ‘Car’ object is calculated first based on the RT method and then diffracted to the OPH plane to generate the B-OPH1 with Fresnel diffraction integrals. In the NLUT, MC-NLUT, MPEG-NLUT and 3DMC-NLUT methods, a set of PFPs for 256 depth planes, whose size and pixel pitch are 2240 × 1440 ((1920 + 320) × (1200 + 240)) pixels and 8.1 µm, respectively, is pre-calculated and stored. The CGH pattern of the ‘Car’ object is then calculated simply by multiplying each object intensity by its corresponding Fresnel fringe pattern tailored from the pre-calculated PFP of each depth plane, and adding them all together.
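The point-wise fringe superposition of the RT method can be sketched as follows; this is a minimal, unoptimized illustration (the function name and the small grid size are ours, not the authors'), in which the field at the hologram plane is the sum of spherical waves from all self-luminous object points.

```python
import numpy as np

def rt_hologram(points, amplitudes, nx=64, ny=64, pitch=8.1e-6, wavelength=532e-9):
    """Ray-tracing CGH sketch: superpose spherical-wave fringes of all object points.

    points: (N, 3) array of object-point coordinates (x, y, z) in meters,
            with z measured from the hologram plane.
    """
    k = 2 * np.pi / wavelength
    # Hologram-plane sample coordinates, centered on the optical axis.
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    u, v = np.meshgrid(x, y)
    field = np.zeros((ny, nx), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((u - px) ** 2 + (v - py) ** 2 + pz ** 2)
        field += a / r * np.exp(1j * k * r)   # spherical wave from one point
    return field
```

Since every object point contributes to every hologram sample, the cost scales as (number of points) × (number of pixels), which is the bottleneck the CH-RMC method reduces by reusing most of the previous frame's hologram.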

As seen in Fig. 9, the six original planar holograms of the 1st-frame (OPHs1) generated on the blue arc of C1 with each of the RT, NLUT, WRP, MC-NLUT, MPEG-NLUT and 3DMC-NLUT methods look almost the same because Fresnel diffraction integrals were employed to calculate the holograms in all methods.

3.3 Extraction of curving-motion parameters

As seen in Fig. 8, the ‘Car’ in the input video scenario moves along a curved path with three different arced circles. Their radii and center locations are calculated to be r1 = 31.36 cm, r2 = 73.48 cm, r3 = 35.06 cm, and O1(20.52 cm, 46.38 cm), O2(14.36 cm, 85.74 cm), O3(1.31 cm, 64.89 cm), respectively, from three sets of three object points, P1-1 to P1-3, P2-1 to P2-3 and P3-1 to P3-3, whose location coordinates are P1-1(46.42 cm, 28.69 cm), P1-2(46.29 cm, 28.51 cm), P1-3(46.14 cm, 28.44 cm); P2-1(31.35 cm, 25.23 cm), P2-2(31.14 cm, 25.09 cm), P2-3(30.92 cm, 25.01 cm); and P3-1(1.02 cm, 24.89 cm), P3-2(0.81 cm, 24.95 cm), P3-3(0.62 cm, 25.02 cm), respectively. The positions of the LPHs for the three arcs colored blue, red and green are determined by their respective arc centers.
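Each arc center and radius above is recovered from three sampled object positions on that arc. A minimal sketch of this circumcircle calculation (the helper name is ours) solves the two perpendicular-bisector equations:

```python
import numpy as np

def circle_through(p1, p2, p3):
    """Center and radius of the circle through three non-collinear 2-D points.

    Solves the perpendicular-bisector linear system; points are (x, z) pairs.
    """
    (x1, z1), (x2, z2), (x3, z3) = p1, p2, p3
    a = np.array([[x2 - x1, z2 - z1],
                  [x3 - x2, z3 - z2]])
    b = 0.5 * np.array([x2**2 - x1**2 + z2**2 - z1**2,
                        x3**2 - x2**2 + z3**2 - z2**2])
    cx, cz = np.linalg.solve(a, b)          # circle center
    r = np.hypot(x1 - cx, z1 - cz)          # radius from any of the points
    return (cx, cz), r
```

Note that consecutive frame positions are only a few millimeters apart, so the three points are nearly collinear and the recovered center is highly sensitive to rounding of the input coordinates; this is consistent with the parameter-extraction errors discussed in Section 4.1.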

For the proper position adjustments between the OPHs and LPHs, the x- and z-directional distance differences between the OPHs and their corresponding LPHs are calculated to be l1 = 20.52 cm and d1 = 31.89 cm for the blue arc, l2 = 14.36 cm and d2 = 71.26 cm for the red arc, and l3 = 1.31 cm and d3 = 50.41 cm for the green arc, respectively. These calculated distance values are equivalent to the x- and z-coordinates of the LPHs because their corresponding OPHs are located at the origin.

Since the radius of each of the three LPHs is fixed at 14.5 cm here, the maximum distance z0 defined by Eq. (9) is calculated to be 209 µm. Thus, d1 becomes 31.90 cm ( = z01 - R + z0 = 46.38 cm - 14.5 cm + 209 µm), and l1 becomes x01 = 20.52 cm. Likewise, d2, l2 and d3, l3 are calculated to be 71.26 cm, 14.36 cm, and 50.41 cm, 1.31 cm, respectively. In addition, three sets of rotation angles of the object between two consecutive video frames on each of the blue, red and green circles are calculated from Eqs. (2)-(4) to be θ1-1 = 0.301°, θ1-2 = 0.304°, …, θ1-n = 0.303°; θ2-1 = 0.301°, θ2-2 = 0.302°, …, θ2-n = 0.306°; and θ3-1 = 0.302°, θ3-2 = 0.305°, …, θ3-n = 0.302°, respectively.
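Each frame-to-frame rotation angle can be recovered from the local arc center and two consecutive object positions. Since Eqs. (2)-(4) are not reproduced in this excerpt, the following is a generic angle computation under that assumption:

```python
import numpy as np

def rotation_angle(center, p_prev, p_next):
    """Signed rotation angle (degrees) of the object between two frames,
    measured about the local arc center."""
    v1 = np.subtract(p_prev, center)
    v2 = np.subtract(p_next, center)
    ang = np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0])
    # wrap to (-180 deg, 180 deg]
    return np.degrees((ang + np.pi) % (2 * np.pi) - np.pi)
```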

3.4 Transformation of local planar holograms to their curved versions

As mentioned above, the OPHs located at the origin are propagated to the positions of their corresponding LPHs, which is called the position-adjustment process, as seen in Fig. 10(a). Figure 10(b) shows the original planar hologram B-OPH1 and its position-adjusted version B-LPH1 with the two distance parameters l1 = 20.52 cm and d1 = 31.89 cm for the blue arc case.


Fig. 10 (a) Position-adjustment processes from the OPHs to their corresponding LPHs, (b) B-OPH1 and its position-adjusted version of the B-LPH1, (c) PH-to-CH conversion process for the blue circle case: (c-1) B-LPH1, (c-2) Conversion table of Tconv and (c-3) B-LCH1.


These LPHs are then transformed into their curved versions, the LCHs, using the PH-to-CH conversion method so that the proposed CH-RMC operations can be performed. As mentioned above, this PH-to-CH conversion is carried out simply by multiplication with the conversion table of Eq. (7). Since the maximum weight W defined by Eq. (6) is set to the pixel pitch of the hologram, where the sampling pitch p and wavelength λ are given by 8.1 µm and 532 nm, respectively, the distance z is calculated to be 246 µm. This means that the distance between the pixels of the LPHs and LCHs becomes 246 µm at most, so z0 can be set to 209 µm, as seen in Fig. 5. By substituting these parameters into Eq. (9), the radius of the local curved hologram, R, is calculated to be 14.5 cm. Since the maximum distance between the pixels of the planar and curved holograms is then 209 µm, it is guaranteed that the diffraction region of one pixel on the LPH encloses only one pixel on the LCH, ensuring that the quality of the image reconstructed from the transformed LCH is not deteriorated.

Figure 10(c) shows an example of the PH-to-CH conversion process from the B-LPH1 to the B-LCH1 with the conversion table Tconv, which was pre-calculated and stored as a function of z(x, y), the propagation distance from one pixel of the B-LPH1 to the corresponding pixel of the B-LCH1, according to Eq. (7). The conversion table is set equal in size to the hologram because the PH-to-CH conversion is done simply by multiplication with this table. The conversion table contains the complex amplitudes of the wavefront diffracted from the planar hologram surface to the curved hologram surface. As seen in Fig. 10(c), the complex values of the three columns in Fig. 10(c-2) are calculated to be 0.45 + i0.89, −0.60 + i0.80, and −0.98 − i0.17 for z = 2.19e-7 m, 4.37e-7 m and 6.54e-7 m, respectively. All entries within each column of the conversion table of Fig. 10(c-2) are identical since the holograms are curved only along the horizontal direction. Thus, all the LCHs of every video frame have the same curvature, which means that the PH-to-CH conversion process can be done with a single conversion table.
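Since Eq. (7) itself is not reproduced in this excerpt, the sketch below assumes the conversion table holds unit-amplitude phase factors exp(i·2πz/λ) of the planar-to-curved gap z(x) = R − √(R² − x²) for a cylindrically curved hologram; the row-replicated structure reflects the horizontal-only curvature described above.

```python
import numpy as np

def ph_to_ch_table(nx=1920, ny=1200, pitch=8.1e-6, R=0.145, wavelength=532e-9):
    """Conversion table sketch for a cylindrically curved hologram.

    z(x) is the assumed gap between the planar hologram and the curved
    surface of radius R at horizontal position x; each entry is the
    unit-amplitude phase factor exp(1j * 2*pi * z / wavelength).
    """
    x = (np.arange(nx) - nx / 2) * pitch        # horizontal sample positions
    z = R - np.sqrt(R**2 - x**2)                # sag of the curved surface
    t_row = np.exp(1j * 2 * np.pi * z / wavelength)
    # Curvature is horizontal only, so every row of the table is identical.
    return np.tile(t_row, (ny, 1))

# PH-to-CH: lch = lph * T    CH-to-PH: lph = lch / T  (same single table)
```

With nx = 1920 and p = 8.1 µm, the maximum sag of this model evaluates to about 209 µm, consistent with the value of z0 quoted above.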

3.5 Curved hologram-based rotational-motion compensation

CH-RMC operations are carried out over the 80 frames of the test video scenario. For instance, as seen in Fig. 11(a), with the proposed CH-RMC process, about 95.16% of the B-LCH1 has been found to overlap with the B-LCH2, which means that 95.16% of the rotational motion-compensated version of the B-LCH1 can be reused as the corresponding part of the B-LCH2 without any additional calculation. Thus, only the hologram pattern for the 4.84% blank part of the B-LCH2 needs to be calculated with the CGH algorithms. In fact, the ratio of the blank region to the overlapped region has been found to be only about 6% on average, which implies a massive reduction of the overall CGH calculation time for the input video scenario.


Fig. 11 (a) Rotational-motion compensation of the ‘Car’ object with the B-LCH1 between the 1st and 2nd-frames, (b) CH-to-PH conversion process for the blue circle case: (b-1) B-LCH2, (b-2) Conversion table of Tconv and (b-3) B-LPH2.


Moreover, this ratio is determined by the rotational angle between the two consecutive frames. That is, the rotation of the B-LCH1, whose radius is R = 14.5 cm, by a rotational angle of θ1-1 = 0.3° = 0.0052 radian between the 1st and 2nd frames is equivalent to shifting the B-LCH1 by an arc length of 759 µm ( = θ1-1 × R), which corresponds to a shift of 93 pixels in the B-LCH1. The B-LCH2 with a resolution of 1920 × 1200 pixels, which is composed of the compensated part of 1827 × 1200 pixels from the B-LCH1 and the blank part of 93 × 1200 pixels, is then converted into the B-LPH2 and propagated back to the origin of the x-z plane to generate the B-OPH2.
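The arc-length-to-pixel bookkeeping of this compensation step can be written down directly (the shift direction in the second helper is an assumption; the magnitudes follow the values in the text):

```python
import numpy as np

def ch_rmc_shift(theta_deg, R=0.145, pitch=8.1e-6):
    """Arc length and pixel shift of a curved hologram for a rotation of
    theta_deg about the local arc center (the CH-RMC reuse step)."""
    arc = np.radians(theta_deg) * R      # arc length along the curved surface (m)
    return arc, int(arc / pitch)         # (meters, whole pixels)

def compensate(prev_ch, shift_px):
    """Reuse the previous curved hologram: shift it by shift_px columns and
    leave a blank strip to be freshly calculated for the new frame."""
    ny, nx = prev_ch.shape
    new_ch = np.zeros_like(prev_ch)
    new_ch[:, :nx - shift_px] = prev_ch[:, shift_px:]   # compensated part
    return new_ch          # the last shift_px columns remain blank
```

For θ = 0.3° and R = 14.5 cm this reproduces the quoted 759 µm arc length and 93-pixel shift at an 8.1 µm pitch.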

3.6 Transformation of the curved holograms to their planar versions

The transformation of the B-LCH2 into its planar version, the B-LPH2, can be done with the CH-to-PH conversion process, which is the inverse of the PH-to-CH conversion. As seen in Fig. 11(b-1), the B-LCH2 consists of the region compensated from the B-LCH1 by the CH-RMC process and the blank region. Only the compensated part of the B-LCH2 is transformed into the corresponding region of the B-LPH2 by division with the conversion table. Figures 11(b-2) and (b-3) show the conversion table and the transformed B-LPH2, respectively. Thus, as seen in Fig. 11(b), the B-LCH2 can be transformed into its corresponding B-LPH2 simply by division with the same conversion table of Eq. (7) used in the PH-to-CH conversion process.

3.7 Generation of the 2nd-frame hologram

As mentioned above, the B-OPH2, the 2nd-frame original planar hologram of the input scene, consists of the compensated part of 1827 × 1200 pixels obtained from the B-LPH1 by the proposed CH-RMC process and the calculated part of 93 × 1200 pixels obtained from the 2nd-frame input scene. The compensated part of the B-OPH2 is obtained from the B-LPH2 by propagating it back to the origin of the x-z plane with the position-adjustment process, as seen in Fig. 12(a). This propagation from the position of the B-LPH2 to that of the B-OPH2 is carried out with the FFT-based Fresnel diffraction equation using the extracted distance values l1 = 20.52 cm and d1 = 31.89 cm. That is, the Fourier-transformed version of the B-LPH2 is multiplied by the angular propagation function determined by the horizontal and axial distances l1 and d1, and then inverse-Fourier-transformed into the corresponding B-OPH2.
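The position-adjustment propagation described above can be sketched with an angular-spectrum transfer function, handling the lateral offset l via the Fourier shift theorem (sign conventions and the evanescent-wave cutoff are our assumptions, not details from the paper):

```python
import numpy as np

def propagate_offset(field, d, l, pitch=8.1e-6, wavelength=532e-9):
    """FFT-based propagation of a hologram over axial distance d (m) with a
    lateral offset l (m): a sketch of the position-adjustment step."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Angular-spectrum transfer function; evanescent components suppressed.
    kz2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    H = np.where(kz2 > 0,
                 np.exp(1j * 2 * np.pi * np.sqrt(np.maximum(kz2, 0)) * d),
                 0)
    H = H * np.exp(-1j * 2 * np.pi * FX * l)   # shift theorem: lateral offset l
    return np.fft.ifft2(np.fft.fft2(field) * H)
```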


Fig. 12 Generation process of the B-OPH2: (a) Position-adjustment process from the LPHs to OPHs, (b) Generated B-OPH2 with both of the compensated and calculated hologram patterns.


In addition, the hologram pattern for the blank region of the B-OPH2 is calculated with each of the NLUT, RT and WRP methods, as well as the MC-NLUT, MPEG-NLUT and 3DMC-NLUT methods. For the NLUT-based methods, the horizontal extent of the blank region of 93 pixels, which ranges from the 1828th to the 1920th pixel, must be carefully considered when tailoring the hologram patterns for each object point from their PFPs. Figure 12(b) shows the final B-OPH2 obtained simply by combining the hologram compensated from the previous frame and the newly calculated hologram.

3.8 Error correction between the compensated and actual input images

As mentioned above, the similarity between the compensated and actual images is measured in terms of the SNR with a threshold value of 30 dB [36]. This high threshold ensures that the compensated image looks similar enough to the actual image and that the holographic video can be reconstructed without image distortions.
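Since the exact SNR definition of [36] is not reproduced here, the sketch below assumes the usual signal-to-error power ratio in dB, with the threshold gating the error-correction step:

```python
import numpy as np

def snr_db(actual, compensated):
    """SNR (dB) of a compensated frame against the actual input frame,
    assumed here as 10*log10(signal power / error power)."""
    noise = np.sum((actual - compensated) ** 2)
    if noise == 0:
        return np.inf
    return 10 * np.log10(np.sum(actual ** 2) / noise)

def needs_error_correction(actual, compensated, threshold_db=30.0):
    """Error correction runs only when the SNR falls below the threshold."""
    return snr_db(actual, compensated) < threshold_db
```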

Figure 13(a-1) shows an example of the difference image between the compensated and actual images at the 2nd frame, which is also called an error image. The SNR value at the 2nd frame is calculated to be 28.39 dB, which means that the compensated image looks slightly different from the actual image and requires an error-correction process. Figures 13(b-1), (b-2) and (b-3) show the compensated hologram of the B-OPH2 based on the proposed CH-RMC method, the calculated hologram for the object points that differ between the compensated and actual images of Fig. 13(a-1), and the error-corrected version of the B-OPH2, respectively.


Fig. 13 (a) Difference images at the 2nd-frame between the (a-1) Compensated and actual images, (a-2) Error-corrected and actual images, (b) Error-correction process: (b-1) Generated B-OPH2 based on the CH-RMC method, (b-2) Calculated hologram for the error image, (b-3) Error-corrected version of the B-OPH2.


Here, the error correction is carried out simply by adding the hologram calculated for the error image of Fig. 13(b-2) to the compensated hologram of Fig. 13(b-1). With this error-correction process, the compensated hologram based on the CH-RMC method becomes almost identical to that of the actual input image. Figure 13(a-2) shows the difference image between the error-corrected and actual images, which looks virtually black, indicating no difference in object points between them. Of course, when the SNR is estimated to be higher than the threshold value of 30 dB, the error-correction process can be skipped.

4. Performance analysis of the proposed method

4.1 For the CH-RMC-based RT, WRP, and NLUT methods

Table 1 shows experimental results on the operational performances of the CH-RMC-based RT, WRP and NLUT methods in terms of the average calculation time per frame (ACT) and average number of calculated object points per frame (ANCOP) for the test video scenario of Fig. 8. For comparison, experimental results on the ACTs and ANCOPs of the original RT, WRP and NLUT methods are also included.


Table 1. ACTs and ANCOPs of original and CH-RMC-based RT, WRP, and NLUT methods for test video scenario.

As seen in Table 1, great improvements in computational speed have been achieved in all methods when the proposed CH-RMC method was employed. For the test video scenario of Fig. 8, the ANCOPs of the original RT, WRP and NLUT methods have been estimated to be 1,208, whereas those of the CH-RMC-based RT, WRP and NLUT methods have been massively reduced in all cases. For the CH-RMC-based RT method, the ANCOP has been reduced from 1,208 down to 325 by the rotational-motion compensation process, a 73.10% decrease compared with the original method, from which the CGH computation time of the CH-RMC-based RT method is expected to be greatly reduced.

For the CH-RMC-based WRP and NLUT methods, the ANCOPs have likewise been reduced from 1,208 down to 316 and 322, respectively, corresponding to 73.84% and 73.34% decreases compared with the original WRP and NLUT methods. Thus, the ANCOPs of the CH-RMC-based RT, WRP and NLUT methods have been reduced by 73.43% on average in comparison with their original methods, from which their computational speeds are expected to be greatly enhanced.
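The quoted percentages follow directly from the ANCOP values in Table 1 and can be verified with a one-line helper:

```python
def reduction_pct(original, reduced):
    """Percentage decrease of `reduced` relative to `original`."""
    return 100.0 * (original - reduced) / original

# ANCOP reductions from Table 1: 1208 -> 325 (RT), 316 (WRP), 322 (NLUT)
# give 73.10%, 73.84% and 73.34%, averaging 73.43%.
```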

Furthermore, the ACTs of the original RT, WRP and NLUT methods have been estimated to be 424.83 s, 7.72 s and 26.61 s, whereas those of the CH-RMC-based RT, WRP and NLUT methods have been shortened to 132.70 s, 3.90 s and 8.89 s, respectively, corresponding to 68.75%, 50.82% and 66.59% decreases compared with the original RT, WRP and NLUT methods. In other words, the CH-RMC-based RT, WRP and NLUT methods have achieved a 62.05% decrease of the ACT on average compared with their original methods, i.e., their computational speeds are enhanced by more than a factor of two. It must be noted that all the CGH methods employing the CH-RMC algorithm show similar improvements in both the ANCOP and ACT because they employ the same rotational-motion compensation processes.

In practice, the ACT of each of the CH-RMC-based RT, WRP and NLUT methods consists of six calculation-time components: parameter extraction (PE), position adjustment (PA), PH-to-CH and CH-to-PH conversions (CON), rotational-motion compensation (RMC), hologram calculation for the blank area (HC) and error correction (EC).

As seen in Table 1, the calculation times of the CH-RMC-based RT, WRP and NLUT methods for the PE processing, in which the five parameters used for the rotational-motion compensation (the rotation angles, the x- and y-directional distances between the OPHs and LPHs, and the centers and radii of the local arcs) are extracted, have been found to be 33.61 ms, 34.54 ms and 33.72 ms, respectively, which correspond to 0.03%, 0.89% and 0.38% of the total ACTs of those methods. These times are very small, much less than 1% of the total, and almost identical to each other since the same PE operations were performed in all methods.

Likewise, the calculation times for the PA, CON and RMC processes have been found to be 269.21 ms, 270.17 ms, 269.58 ms; 21.13 ms, 20.79 ms, 21.36 ms; and 11.92 ms, 11.47 ms, 11.36 ms, respectively, in each of the CH-RMC-based RT, WRP and NLUT methods, which correspond to 0.2%, 6.93%, 3.03%; 0.016%, 0.53%, 0.24%; and 0.01%, 0.29%, 0.13% of their total ACTs, respectively. These times are also very small and almost identical to each other because the same calculation processes were carried out in all methods.

On the other hand, the calculation times for the HC processing, in which the CGH patterns for the blank regions of the B-OPHs are calculated, have been found to be 22.12 s, 1.45 s and 1.42 s in the CH-RMC-based RT, WRP and NLUT methods, respectively, which correspond to 16.67%, 37.18% and 15.97% of the total ACTs of those methods. These results show that the HC calculation time is the second-largest component of the total ACT in all three methods, compared with the processes mentioned above, whose calculation times are almost all less than 1% of the total ACT.

In particular, the relative HC calculation-time percentage of the CH-RMC-based WRP method within its total ACT has been calculated to be 2.35-fold higher than those of the CH-RMC-based RT and NLUT methods on average. In the WRP method, the blank region is located at the far side of the hologram plane while the object points are located far from the blank region, which makes the HC calculation time of the WRP method much longer than those of the others. As mentioned above, the area of the blank region is determined by the rotational angle between two consecutive frames, so the smaller the rotational angle, the smaller the blank region becomes. Thus, as the sampling rate of the input video increases, the blank region required for the HC process gets smaller, whereas the number of total video frames increases at the same time. Therefore, there is a tradeoff between the size of the blank region and the number of sampled video frames for the effective reduction of the HC time.

Furthermore, the calculation times for the EC processes have been found to be 110.24 s, 2.12 s and 7.13 s in the CH-RMC-based RT, WRP and NLUT methods, respectively, which correspond to 83.07%, 54.36% and 80.20% of the total ACTs of those methods. These results reveal that the EC calculation time is the largest component of the total ACT in all three methods. In the EC process, the error points between the actual and compensated object images are extracted, and the CGH patterns for them are calculated and added to the compensated CGH pattern. As mentioned above, these errors occur due to an inaccurate PE process and abrupt perspective changes of the object. The perspective changes of the object can be kept very small when the rotation angles between two successive frames are small. On the other hand, most error points occur due to inaccuracy in the extracted parameters, because 3DS MAX produces only relative depth information by rendering a depth image, which potentially causes some errors in the depth information of the 3-D object.

4.2 For the MC-NLUT, MPEG-NLUT, and 3DMC-NLUT methods

Table 2 shows experimental results on the comparative performance of the conventional MC-NLUT, MPEG-NLUT and 3DMC-NLUT methods, as well as the original and CH-RMC-based NLUT methods, in terms of the ANCOP and ACT for the test video scenario of Fig. 8. As discussed above, the ANCOP and ACT of the CH-RMC-based NLUT have been reduced by 73.34% and 66.59%, respectively, in comparison with the original NLUT method. On the other hand, the ANCOPs of the conventional MC-NLUT, MPEG-NLUT and 3DMC-NLUT methods have instead increased by 27.65%, 63.66% and 76.24% compared with the original NLUT method, which means that no reductions of the ANCOPs could be obtained with those methods for the test video scenario, in which the 3-D object moves in rotational motion.


Table 2. ACTs and ANCOPs of conventional MC-NLUT, MPEG-NLUT, 3DMC-NLUT, and NLUT methods for test video scenario.

In addition, as seen in Table 2, the ACTs of the conventional MC-NLUT, MPEG-NLUT and 3DMC-NLUT methods have also increased, by 29.35%, 67.43% and 75.02%, respectively, compared with the original NLUT method, which reveals that the linear motion estimation and compensation algorithms employed in those NLUT methods no longer enhance the computational speed for a 3-D object moving in random motion.

In short, Table 2 shows that the ANCOP and ACT of the original NLUT method for a 3-D object in rotational motion can be massively reduced when the CH-RMC algorithm is employed, whereas those of the MC-NLUT, MPEG-NLUT and 3DMC-NLUT methods instead increase significantly. Since all the NLUT methods employing linear motion estimation and compensation operate on the shift-invariance property of the PFP, which allows only x-, y- and z-directional motion vectors to be estimated and compensated, they are effective only for a 3-D object moving in linear motion with small depth variations. On the other hand, the proposed CH-RMC-based NLUT method operates on a new concept, the rotational invariance of the curved hologram. In this method, the rotational path of the object is divided into a set of locally-different arced circles, and the rotational motions of the object on each arc are properly estimated and compensated, which results in great reductions of the corresponding ANCOP and ACT. Thus, by combined use of the conventional linear motion compensation and the proposed rotational motion compensation methods, most natural motions of objects, including linear and curving motions, can be effectively compensated.

Hence, the proposed CH-RMC-based NLUT should eventually be implemented on field-programmable gate arrays (FPGAs) or graphics processing units (GPUs) for application to the real-time generation of holographic video of an input 3-D object moving in free motion. The operational concept of the proposed method matches well with the software and memory structures of commercial GPUs, so the rotational-motion compensation process of the proposed method is expected to be implemented on commercial GPUs for real-time application.

5. Reconstruction of the holographic 3-D video

Figures 14(a1)-(a4) and (b1)-(b4) show two sets of panoramic image sequences synthesized from the 80 frames of reconstructed object images, together with the images of the 21st, 51st and 80th frames, computationally and optically reconstructed from the holographic videos generated with the CH-RMC-based NLUT method, respectively. The fixed ‘House’ image is reconstructed at a depth of 264 mm, while the ‘Car’ images are reconstructed at depth planes of 252 mm, 249 mm and 250 mm for the 21st, 51st and 80th frames of the test video scenario, respectively. The 80 frames of computationally and optically reconstructed 3-D scenes, compressed into 2-second video files (Visualization 1 and Visualization 2, respectively), are also included in Fig. 14.


Fig. 14 Computationally and optically reconstructed input images from the holographic videos generated with the CH-RMC-based NLUT method for the test video scenario (Visualization 1, Visualization 2): (a1)-(a4) Computationally reconstructed input images, (b1)-(b4) Optically reconstructed input images, (a1), (b1) Panoramic image sequences synthesized with 80 frames of reconstructed input images, (a2)-(a4), (b2)-(b4) Reconstructed input images of the 21st, 51st and 80th-frames, respectively.


Figures 14(a1) and (b1) show panoramic input images synthesized from the 80 frames of computationally and optically reconstructed object images, respectively, which confirm that the sequential tracks of the moving objects match well with those of the test video scenario even though rotational-motion compensation processes have been performed on every video frame. In addition, as seen in the optically-reconstructed input scenes, the ‘Car’ image of the 51st frame (Fig. 14(b3)) looks more blurred than those of the 21st and 80th frames (Figs. 14(b2) and (b4)), because the 3-D scenes were optically reconstructed with the focus on the fixed ‘House’ image. That is, the depth distance between the ‘Car’ and ‘House’ objects at the 51st frame is slightly larger than at the 21st and 80th frames, which causes the ‘Car’ image of the 51st frame to be slightly more out of focus.

Successful experimental results on the reconstruction of those holographic videos for the test video scenario may finally confirm the feasibility of the proposed method.

6. Conclusions

A CH-RMC method has been proposed for accelerated generation of holographic videos of a 3-D object moving on the rotational path with many locally-different arcs. Those rotational motions of the object made on every arc can be compensated just by rotating their curved holograms along the curving surfaces matched with the object’s moving trajectory, which results in great enhancements of the computational speed of the CGH algorithms. Experiments with the test video scenario reveal that ANCOPs and ACTs of the CH-RMC-based RT, WRP and NLUT have been reduced by 73.43% and 62.05%, respectively, on the average, in comparison with their original methods. Successful results on the computational and optical reconstructions of input 3-D scenes from those holographic videos may confirm the feasibility of the proposed method in the practical application fields.

Funding

National Research Foundation of Korea (NRF) (2011-0030079); Research Grant of Kwangwoon University in 2018.

References and links

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948).

2. C. J. Kuo and M. H. Tsai, Three-Dimensional Holographic Imaging (John Wiley & Sons, 2002).

3. T.-C. Poon, Digital Holography and Three-dimensional Display (Springer Verlag, 2007).

4. X. Xu, Y. Pan, P. P. M. Y. Lwin, and X. Liang, “3D holographic display and its data transmission requirement,” in Proceedings of IEEE Conference on Information Photonics and Optical Communications (IEEE, 2011), pp. 1–4.

5. F. Yaraş, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. 48(34), H48–H53 (2009).

6. HoloEye Photonics AG, “GAEA 10 Megapixel Phase Only Spatial Light Modulator (Reflective),” http://holoeye.com/spatial-light-modulators/gaea-4k-phase-only-spatial-light-modulator/.

7. M. Makowski, I. Ducin, K. Kakarenko, A. Kolodziejczyk, A. Siemion, A. Siemion, J. Suszek, M. Sypek, and D. Wojnowski, “Efficient image projection by Fourier electroholography,” Opt. Lett. 36(16), 3018–3020 (2011).

8. H. Nakayama, N. Takada, Y. Ichihashi, S. Awazu, T. Shimobaba, N. Masuda, and T. Ito, “Real-time color electroholography using multiple graphics processing units and multiple high-definition liquid-crystal display panels,” Appl. Opt. 49(31), 5993–5996 (2010).

9. K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express 19(10), 9086–9101 (2011).

10. T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, “Realistic expression for full-parallax computer-generated holograms with the ray-tracing method,” Appl. Opt. 52(1), A201–A209 (2013).

11. T. Ichikawa, T. Yoneyama, and Y. Sakamoto, “CGH calculation with the ray tracing method for the Fourier transform optical system,” Opt. Express 21(26), 32019–32031 (2013).

12. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993).

13. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009).

14. D. Arai, T. Shimobaba, K. Murano, Y. Endo, R. Hirayama, D. Hiyama, T. Kakue, and T. Ito, “Acceleration of computer-generated holograms using tilted wavefront recording plane method,” Opt. Express 23(2), 1740–1747 (2015).

15. T. Tommasi and B. Bianco, “Frequency analysis of light diffraction between rotated planes,” Opt. Lett. 17(8), 556–558 (1992).

16. D. Im, J. Cho, J. Hahn, B. Lee, and H. Kim, “Accelerated synthesis algorithm of polygon computer-generated holograms,” Opt. Express 23(3), 2863–2871 (2015).

17. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20(9), 1755–1762 (2003).

18. Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. Tanjung, C. Tan, and T. C. Chong, “Fast CGH computation using S-LUT on GPU,” Opt. Express 17(21), 18543–18555 (2009).

19. H. Yoshikawa, T. Yamaguchi, and R. Kitayama, “Real-time generation of full color image hologram with compact distance look-up table,” in Digital Holography and Three-Dimensional Imaging, 2009 OSA Technical Digest Series (Optical Society of America, 2009), pp. DWC4.

20. H. Yoshikawa, S. Iwase, and T. Oneda, “Fast computation of Fresnel holograms employing difference,” Proc. SPIE 3956, 48–55 (2000).

21. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express 21(7), 9192–9197 (2013).

22. M.-W. Kwon, S.-C. Kim, S.-E. Yoon, Y.-S. Ho, and E.-S. Kim, “Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes,” Opt. Express 23(3), 2101–2120 (2015).

23. M.-W. Kwon, S.-C. Kim, and E.-S. Kim, “Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes,” Appl. Opt. 55(3), A22–A31 (2016).

24. S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008).

25. S.-C. Kim and E.-S. Kim, “Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods,” Appl. Opt. 48(6), 1030–1041 (2009).

26. S.-C. Kim, J.-M. Kim, and E.-S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20(11), 12021–12034 (2012).

27. S.-C. Kim, J.-H. Yoon, and E.-S. Kim, “Fast generation of three-dimensional video holograms by combined use of data compression and lookup table techniques,” Appl. Opt. 47(32), 5986–5995 (2008).

28. S.-C. Kim, X.-B. Dong, M.-W. Kwon, and E.-S. Kim, “Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table,” Opt. Express 21(9), 11568–11584 (2013).

29. X.-B. Dong, S.-C. Kim, and E.-S. Kim, “MPEG-based novel-look-up-table method for accelerated computation of digital video holograms of three-dimensional objects in motion,” Opt. Express 22(7), 8047–8067 (2014).

30. X.-B. Dong, S.-C. Kim, and E.-S. Kim, “Three-directional motion compensation-based novel-look-up-table for video hologram generation of three-dimensional objects freely maneuvering in space,” Opt. Express 22(14), 16925–16944 (2014). [CrossRef]   [PubMed]  

31. S.-F. Lin and E.-S. Kim, “Single SLM full-color holographic 3-D display based on sampling and selective frequency-filtering methods,” Opt. Express 25(10), 11389–11404 (2017). [CrossRef]   [PubMed]  

32. B. Árpád, “87.47 A Heron-type formula for the triangle,” The Mathematical Gazette 87(509), 324–326 (2003). [CrossRef]  

33. S. Oh and I. K. Jeong, “Cylindrical angular spectrum using Fourier coefficients of point light source and its application to fast hologram calculation,” Opt. Express 23(23), 29555–29564 (2015). [CrossRef]   [PubMed]  

34. B.-J. Jackin and T. Yatagai, “Fast calculation method for computer-generated cylindrical hologram based on wave propagation in spectral domain,” Opt. Express 18(25), 25546–25555 (2010). [CrossRef]   [PubMed]  

35. B.-J. Jackin and T. Yatagai, “360° reconstruction of a 3D object using cylindrical computer generated holography,” Appl. Opt. 50(34), H147–H152 (2011). [CrossRef]   [PubMed]  

36. Wikipedia, “Signal-to-noise ratio (imaging),” https://en.wikipedia.org/wiki/Signal-to-noise_ratio_(imaging).



Supplementary Material (2)

Visualization 1
Visualization 2



Figures (14)

Fig. 1 Conceptual diagram of the rotational-motion compensation process of a moving object with the curved hologram on the rotating surface.
Fig. 2 Overall functional block-diagram of the proposed CH-RMC method.
Fig. 3 Operational process of the proposed CH-RMC method for a 3-D car object freely-moving on the ground of the x-z plane.
Fig. 4 Schematic diagram for extraction of the curving-motion parameters of the car object on the blue arc.
Fig. 5 Conceptual diagram of the proposed PH-to-CH and CH-to-PH conversion processes.
Fig. 6 Conceptual diagram of the rotational-motion compensation process.
Fig. 7 Overall configuration of the experimental setup for digital and optical processes of the proposed CH-RMC-based system.
Fig. 8 (a) Configuration of the test 3-D video scenario with 80 video frames; (b) Top view of the test scenario.
Fig. 9 Calculated CGH patterns for the 1st-frame B-OPH of the test video scenario with each of the (a) RT, (b) WRP, (c) NLUT, (d) MC-NLUT, (e) MPEG-NLUT, and (f) 3DMC-NLUT methods.
Fig. 10 (a) Position-adjustment processes from the OPHs to their corresponding LPHs; (b) B-OPH1 and its position-adjusted version of the B-LPH1; (c) PH-to-CH conversion process for the blue circle case: (c-1) B-LPH1, (c-2) conversion table Tconv, and (c-3) B-LCH1.
Fig. 11 (a) Rotational-motion compensation of the ‘Car’ object with the B-LCH1 between the 1st and 2nd frames; (b) CH-to-PH conversion process for the blue circle case: (b-1) B-LCH2, (b-2) conversion table Tconv, and (b-3) B-LPH2.
Fig. 12 Generation process of the B-OPH2: (a) Position-adjustment process from the LPHs to the OPHs; (b) Generated B-OPH2 with both the compensated and calculated hologram patterns.
Fig. 13 (a) Difference images at the 2nd frame between the (a-1) compensated and actual images and (a-2) error-corrected and actual images; (b) Error-correction process: (b-1) generated B-OPH2 based on the CH-RMC method, (b-2) calculated hologram for the error image, (b-3) error-corrected version of the B-OPH2.
Fig. 14 Computationally and optically reconstructed input images from the holographic videos generated with the CH-RMC-based NLUT method for the test video scenario (Visualization 1, Visualization 2): (a1)-(a4) computationally reconstructed input images; (b1)-(b4) optically reconstructed input images; (a1), (b1) panoramic image sequences synthesized with 80 frames of reconstructed input images; (a2)-(a4), (b2)-(b4) reconstructed input images of the 21st, 51st, and 80th frames, respectively.

Tables (2)

Table 1 ACTs and ANCOPs of original and CH-RMC-based RT, WRP, and NLUT methods for test video scenario.

Table 2 ACTs and ANCOPs of conventional MC-NLUT, MPEG-NLUT, 3DMC-NLUT, and NLUT methods for test video scenario.

Equations (14)


(1) $H(r_2,\theta_2)=H(r_2,\theta_1+\theta)$

(2) $\theta_{11}=\theta_A+\theta_B$

(3) $\theta_A=\arccos\!\left(h/\overline{O_1 P_{11}}\right)$

(4) $\theta_B=\arccos\!\left(h/\overline{O_1 P_{12}}\right)$

(5) $\mathrm{B\text{-}LCH}_1=\sum_{y=1}^{L}\sum_{x=1}^{M}\mathrm{B\text{-}LPH}_1\exp\!\left(ik z_{(x,y)}\right)$

(6) $W=z\tan(\alpha_m)=z\tan\!\left[\arcsin\!\left(\frac{\lambda}{2p}\right)\right]$

(7) $T_{conv}(x,y)=\exp\!\left(ik z_{(x,y)}\right)$

(8) $z_{(x,y)}=z_0-R+\sqrt{R^2-\left(\frac{w_h}{2}-x\right)^2}$

(9) $z_0=R-\sqrt{R^2-\left(\frac{w_h}{2}\right)^2}$

(10) $A_{\mathrm{B\text{-}LCH}_1}(x,y)=A_{\mathrm{B\text{-}LPH}_1}(x,y)\,T_{conv}(x,y)$

(11) $T'_{conv}(x,y)=\exp\!\left(-ik z_{(x,y)}\right)$

(12) $T'_{conv}(x,y)=1/\exp\!\left(ik z_{(x,y)}\right)=1/T_{conv}(x,y)$

(13) $A_{\mathrm{B\text{-}LPH}_2}(x,y)=A_{\mathrm{B\text{-}LCH}_2}(x,y)\,T'_{conv}(x,y)=A_{\mathrm{B\text{-}LCH}_2}(x,y)/T_{conv}(x,y)$

(14) $\mathrm{SNR}=10\log_{10}\!\left\{\sum_{x=1}^{P_x}\sum_{y=1}^{P_y}N(x,y)^2\Big/\sum_{x=1}^{P_x}\sum_{y=1}^{P_y}\left[N(x,y)-M(x,y)\right]^2\right\}$
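The PH-to-CH conversion table, its inverse, and the SNR figure of merit above can be illustrated numerically. The sketch below is a minimal illustration, not the paper's implementation: the resolution, pixel pitch, wavelength, and curvature radius R are placeholder values, and the planar-hologram field is a dummy array. It computes the axial gap z(x,y) between the planar and curved holograms, applies T_conv(x,y) = exp(ikz), and inverts it, so the PH-to-CH-to-PH round trip recovers the original field.

```python
import numpy as np

# Assumed (placeholder) parameters -- not the paper's experimental values.
M, L = 1024, 1024            # hologram resolution in pixels (width w_h = M*p)
p = 8e-6                     # pixel pitch [m]
wavelength = 532e-9          # illumination wavelength [m]
R = 0.05                     # radius of the curving surface [m], R >= w_h/2
k = 2 * np.pi / wavelength   # wavenumber

# Axial gap between planar hologram (PH) and curved hologram (CH):
#   z(x,y) = z0 - R + sqrt(R^2 - (w_h/2 - x)^2),
#   z0     = R - sqrt(R^2 - (w_h/2)^2),
# so z = 0 at the hologram edges and z = z0 (the sag) at its center.
x = np.arange(M) * p
w_h = M * p
z0 = R - np.sqrt(R**2 - (w_h / 2) ** 2)
z = z0 - R + np.sqrt(R**2 - (w_h / 2 - x) ** 2)
z = np.broadcast_to(z, (L, M))   # same depth profile for every row y

# Conversion table T_conv(x,y) = exp(i k z) and its inverse T'_conv = 1/T_conv
T_conv = np.exp(1j * k * z)
T_conv_inv = 1.0 / T_conv        # equals exp(-i k z)

# PH-to-CH: A_CH = A_PH * T_conv ;  CH-to-PH: A_PH = A_CH / T_conv
A_PH = np.ones((L, M), dtype=complex)   # dummy planar-hologram field
A_CH = A_PH * T_conv
A_back = A_CH * T_conv_inv
assert np.allclose(A_back, A_PH)        # round trip recovers the PH field

def snr_db(n_img, m_img):
    """SNR = 10 log10( sum N^2 / sum (N - M)^2 ), N: reference, M: test."""
    return 10 * np.log10(np.sum(n_img**2) / np.sum((n_img - m_img) ** 2))
```

The phase-only table is why the compensation is cheap: converting between the planar and curved holograms is a single element-wise multiplication, with no per-object-point fringe recalculation.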
