
Faster generation of holographic videos of objects moving in space using a spherical hologram-based 3-D rotational motion compensation scheme


Abstract

A spherical hologram-based three-dimensional rotational-motion compensation (SH-3DRMC) method is proposed for the accelerated generation of holographic videos of a three-dimensional (3-D) object moving in space along an arbitrary trajectory with many locally-different curvatures. All the 3-D rotational motions of the object made on each arc can be compensated just by rotating their local spherical holograms along the spherical surfaces matched with the object's moving trajectory, using the estimated rotation axes and angles, which enables a massive reduction of the computational complexity of the conventional hologram-generation algorithms and results in an accelerated calculation of holographic videos. Experiments with a test video show that the average calculation times of the conventional NLUT, WRP and 1-D NLUT methods employing the proposed SH-3DRMC scheme are noticeably reduced, by 34.75%, 41.37% and 31.64%, respectively, in comparison with those of their original versions. These experimental results confirm the feasibility of the proposed system.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Thus far, the electro-holographic display based on computer-generated holograms (CGHs) has suffered from a couple of critical problems in its practical application [1–5]. One of them is the unavailability of large-scale, high-resolution spatial light modulators (SLMs) to reconstruct the holographic data into 3-D videos, because the hologram resolution is on the order of the light wavelength [6]. The other is the computational complexity of generating holographic videos in real time [7,8]. Thus, a lot of research in electro-holographic displays has been focused on the development of fast CGH algorithms [9–34].

For this, many kinds of CGH algorithms have been proposed, including the classical ray-tracing (RT) method [9–11], look-up table (LUT) [12], novel look-up table (NLUT) [29], wavefront recording plane (WRP)-based [13–15], polygon-based [16,17], S-LUT [18], image hologram-based [19], recurrence relation-based [20], double-step Fresnel diffraction (DSF)-based [21], FPGA-based [22,23], sparse-based [24–26], warping-based [27], accelerated point-based Fresnel diffraction [28] and GPU-based [30–32] methods.

Among them, the NLUT was presented as one of the accelerated CGH algorithms [29]. In this method, a 3-D object is approximated as a set of discretely-sliced image planes with their own depths, and only the fringe patterns of the center-located object points on each image plane, called principal fringe patterns (PFPs), are pre-calculated and stored. The fringe patterns for the other object points on each image plane are then generated just by shifting and adding their corresponding PFPs, without any additional calculation, based on the unique shift-invariance property of the PFP.
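As a concrete illustration of this shift-and-add principle, a minimal NumPy sketch is given below. The paper's own implementation is in MATLAB and is not shown, so the pfp() generator, array sizes and point format here are assumptions for illustration only.

```python
import numpy as np

def pfp(N, z, wl=532e-9, p=8.1e-6):
    """Principal fringe pattern of a unit point at the plane center, depth z
    (Fresnel zone phase of Eq. (1) with the constant prefactor dropped)."""
    k = np.arange(N) - N // 2
    X, Y = np.meshgrid(k * p, k * p)
    return np.exp(1j * np.pi * (X**2 + Y**2) / (wl * z))

def nlut_cgh(points, N=512):
    """points: iterable of (dx, dy, z, amp); dx, dy are integer pixel offsets."""
    cgh = np.zeros((N, N), dtype=complex)
    tables = {}
    for dx, dy, z, amp in points:
        if z not in tables:                 # one pre-calculated PFP per depth plane
            tables[z] = pfp(N, z)
        # shift-invariance: an off-center point only shifts its depth's PFP
        cgh += amp * np.roll(tables[z], (dy, dx), axis=(0, 1))
    return cgh
```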

Here it must be noted that the NLUT usually generates the CGH patterns of the 3-D object in motion based on a two-step process consisting of pre-processing and main-processing [31–36]. In the pre-processing, the number of object points to be calculated is minimized just by removing as much redundant object data between consecutive 3-D video frames as possible, employing various motion estimation and compensation algorithms used in conventional digital communication systems [4,19]. This data-compression operation of the NLUT method can be carried out based on the shift-invariance property of the PFP. In the following main-processing, CGH patterns only for those compressed object data are calculated with repetitive shifting and adding operations of their corresponding PFPs [31,33]. Thus, the computational speed of the NLUT method can be enhanced not only by reducing the number of calculated object points in the pre-processing, but also by shortening the CGH calculation time in the main-processing.

There are several types of NLUTs employing this pre-processing for eliminating the temporal redundancy between two consecutive video frames, which include the temporal redundancy-based NLUT (TR-NLUT) [33], motion compensation-based NLUT (MC-NLUT) [32], MPEG-based NLUT (MPEG-NLUT) [35] and three-directional motion compensation-based NLUT (3D-MC-NLUT) [36] methods, as well as the full-scale one-dimensional NLUT (1-D NLUT) method enabling the faster generation of holographic videos with the minimum memory capacity [37]. It is certain that the computational speed of these NLUT methods has been greatly enhanced, but they still have operational limitations in their practical applications. As mentioned above, all those NLUT methods employing the motion estimation and compensation process operate based on the shift-invariance property of the PFP [34–36]. This property, however, allows only the x, y and z-directional motion vectors to be estimated and compensated. Thus, these methods are effective only in situations where 3-D objects move linearly with small depth variations [34–36].

However, in the real world, 3-D objects usually move in random motions with many different curvatures on the ground or in space, which cannot be compensated with the traditional NLUT-based motion compensation methods. To deal with rotational motion, a simple rotational transformation technique was formulated [38]. However, the rotation of a 3-D object in space always brings new information into the following frames due to the perspective changes of the object, and such data happen to be lost in the rotational transformation technique. In addition, a global motion compensation method was also presented for the compression of holographic videos [39].

As a feasible approach for the fast generation of holographic videos of a 3-D object moving randomly along many curvatures, the curved hologram-based rotational-motion compensation (CH-RMC) method, based on the concept of rotation-invariance of the curved hologram, was proposed [40,41]. In this method, the rotational motions of the 3-D object made on every arc can be compensated just by rotating their local curved holograms on the curved surfaces matched with the trajectory on which the object moves. Thus, with this CH-RMC algorithm, most hologram patterns of the 3-D object in rotational motion can be generated without a direct calculation process, which results in a dramatic reduction of the overall calculation time of the holographic video. Nevertheless, this CH-RMC method can only be applied to a 3-D object moving on the ground, since the cylindrical local curved holograms must be aligned perpendicular to the ground, and additional complex operations are required for the 3-D rotation case, which increases its time cost and limits the efficiency of its rotational compensation. In other words, only two-dimensional (2-D) rotational motions on the ground can be properly compensated with this method.

Thus, in this paper, the spherical hologram-based 3-D rotational-motion compensation (SH-3DRMC) scheme is proposed as a new approach for the accelerated generation of holographic videos of a 3-D object in rotational motion in space. In this method, spherical forms of the local holograms are employed to compensate the object's rotational motions on each curvature. Here, motion compensation can be carried out just by rotating the local spherical holograms along the spherical surfaces matched with the moving trajectory of the object, using the estimated rotation axes and angles between two successive frames. Thus, most hologram patterns of the previous frames can be reused to generate those of the current frames, which results in a significant reduction of the overall CGH calculation time for the 3-D object moving in space. It must be noted here that this proposed scheme can be applied to any conventional CGH algorithm to enhance its computational speed, which is its most important feature.

To confirm the feasibility of the proposed SH-3DRMC method, the operational principle of the proposed method is analyzed based on wave-optics, and experiments with a test 3-D object in rotational-motion in space are carried out with three conventional CGH algorithms of NLUT, WRP and 1-D NLUT. Operational performances of those NLUT, WRP and 1-D NLUT employing the SH-3DRMC scheme are then discussed in terms of the computational speed in comparison with those of their original versions.

2. Proposed method

Figure 1 shows an overall functional diagram of the proposed SH-3DRMC method, which is composed of a four-step process. Here, a 3-D object of the ‘Airplane’ is assumed to move along the curved pathway with three locally-different arcs in space, where coordinates of the rotational axes and angles in each local arc can be estimated.

Fig. 1. Overall functional block-diagram of the proposed method.

Initially, just by applying a segmentation strategy to the randomly-moving trajectory of the 3-D object in space, the object's trajectory can be divided into a set of local spherical arcs with different radii, and each spherical arc can then be compensated just by using its local spherical hologram (LSH), which is transformed from the local planar hologram (LPH), where the LPH can be obtained just by propagating the original planar hologram (OPH). As seen in Fig. 2, the object flies from the location A1 (x1, y1, z1) to the location A4 (x4, y4, z4) in space along the rotational pathway, where the moving trajectory of the object can be decomposed into three local arcs, A1-A2, A2-A3 and A3-A4, highlighted in blue (B), red (R) and green (G), respectively. Here, the rotational motions on each arc can be compensated with their local spherical holograms (LSHs), denoted as the B-LSH, R-LSH and G-LSH, respectively. In addition, the local planar holograms (LPHs) for each of those three arcs are denoted as the B-LPH, R-LPH and G-LPH, respectively, and are located just behind their corresponding B-LSH, R-LSH and G-LSH, whose centers coincide with those of the local spherical arcs on which the object moves and whose radii are the same for all LSHs.

Fig. 2. Operational process of the proposed method for a 3-D object moving in space along the rotational pathway with three locally-different curvatures.

For the case of the blue spherical arc in Fig. 2, the 1st-frame hologram pattern of the object, called the 1st-frame B-original plane hologram (B-OPH1), is initially calculated with one of the conventional CGH algorithms. Then, in the second step, the rotational motions of the object on this arc are estimated and evaluated to extract the rotational parameters, i.e., the rotational axes and angles between two successive frames. In the third step, the 3-D rotational motion compensation processes are sequentially carried out between two consecutive frames, which are divided into six sub-processes: 1) propagation of the B-OPH1 to the B-LPH1, 2) conversion of the B-LPH1 into the B-LSH1, 3) rotation of the B-LSH1 with the estimated rotational axis and angle, 4) conversion of the rotated B-LSH1 back into its rotated version of the B-LPH1, 5) calculation of the hologram pattern for the blank region of the estimated B-LPH1, and 6) propagation of the rotated B-LPH1 to the rotated B-OPH1, where the rotated B-OPH1 corresponds to the estimated 2nd-frame B-OPH (B-OPH2) obtained from the 3-D rotational motion compensation process between the 1st and 2nd frames. In the final step, the differences between the estimated and actual B-OPH2 due to the erroneous points between the estimated and actual object images are corrected.

2.1 Generation of the 1st-frame B-OPH

In fact, the 1st-frame B-OPH (B-OPH1) of the object can be generated with one of the conventional CGH algorithms. With such an algorithm, the hologram pattern, denoted as U(x, y), can be calculated based on the Fresnel diffraction equation of Eq. (1).

$$U(x,y) = \frac{{\exp (i\frac{{2\pi }}{\lambda }z)}}{{i\lambda z}}\int\!\!\!\int {u({x_o},{y_o})\exp (i\frac{\pi }{{\lambda z}}({{(x - {x_o})}^2} + {{(y - {y_o})}^2}))} d{x_o}d{y_o}, $$
where u(xo, yo), λ and z represent the image intensity, the wavelength of the recording light and the distance between the image and hologram planes, respectively.
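For reference, Eq. (1) can be discretized with a single FFT. The following NumPy sketch is illustrative only (a square N×N input plane and the function name are assumptions); note that the output-plane pixel pitch becomes λz/(Np) in this scheme.

```python
import numpy as np

def fresnel_single_fft(u0, z, wl=532e-9, p=8.1e-6):
    """One-FFT discretization of Eq. (1) for an N x N input plane u0."""
    N = u0.shape[0]
    k = np.arange(N) - N // 2
    X, Y = np.meshgrid(k * p, k * p)
    chirp_in = np.exp(1j * np.pi * (X**2 + Y**2) / (wl * z))
    F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u0 * chirp_in)))
    p_out = wl * z / (N * p)                     # output-plane sampling pitch
    Xo, Yo = np.meshgrid(k * p_out, k * p_out)
    chirp_out = np.exp(1j * np.pi * (Xo**2 + Yo**2) / (wl * z))
    return np.exp(1j * 2 * np.pi * z / wl) / (1j * wl * z) * chirp_out * F
```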

2.2 Estimation of three-dimensional rotational-motion parameters of the 3-D object

Figure 3 shows how to determine the parameters involved in the 3-D rotational-motion compensation for the case of the blue spherical arc, which include the rotational axis and angle between two successive frames of the 3-D image, and the center coordinates of the sphere on which the blue arc is located. The rotational matrix can then be obtained from the estimated rotational axis and angle, with which the B-LSH and the 3-D image of the current frame are rotated to compensate for the rotational motion of the object between the two successive frames. From this rotationally motion-compensated B-LSH and 3-D image of the current frame, those of the following frame can be estimated without a direct calculation process.

Fig. 3. Schematic diagram for extracting the rotational motion parameters of the object moving along the blue arc in space for its 3-D rotational motion compensation.

In addition, the coordinates of the B-LPH1, as another important parameter, need to be calculated after the center coordinates of the sphere are determined. In fact, the B-LPH1 has the same x and y-coordinates as the center of the local sphere, since the B-LPH1 is located just in front of the center of the local sphere, and the z-coordinate of the B-LPH1 needs to be individually calculated according to the geometric relation between the B-LPH1 and the fixed local blue sphere, which is explained in detail below.

First of all, the center coordinates of the sphere are determined before calculating the other parameters. As seen in Fig. 3(b), with four points located on the blue spherical arc, the center coordinates of the local blue sphere can be calculated. That is, four equations, given by Eq. (2), can be derived just by putting the coordinate values of those four points into the standard sphere equation.

$$\left\{ \begin{array}{l} (x_{1-0}-x_1)^2 + (y_{1-0}-y_1)^2 + (z_{1-0}-z_1)^2 = r_1^2\\ (x_{1-1}-x_1)^2 + (y_{1-1}-y_1)^2 + (z_{1-1}-z_1)^2 = r_1^2\\ (x_{1-2}-x_1)^2 + (y_{1-2}-y_1)^2 + (z_{1-2}-z_1)^2 = r_1^2\\ (x_{2-0}-x_1)^2 + (y_{2-0}-y_1)^2 + (z_{2-0}-z_1)^2 = r_1^2 \end{array} \right.,$$
where r1 denotes the radius of the local blue sphere centered at O1. The coordinate values of O1(x1, y1, z1) can be obtained just by solving the equations of Eq. (2). Here it must be noted that this calculation of the center coordinates of the local sphere is required only when the object moves onto a different arc, such as the blue, red and green arcs located on their corresponding spheres. After the center coordinates of the blue sphere are obtained, the rotational axis and angle between the two points A1−0(x1−0, y1−0, z1−0) and A1−1(x1−1, y1−1, z1−1), denoted as k(xk, yk, zk) and θ1−1, respectively, can be calculated from Eqs. (3)–(5).
$${\theta _{1 - 1}} = \arccos (\frac{{{O_1}{A_{1 - 0}} \cdot {O_1}{A_{1 - 1}}}}{{|{{O_1}{A_{1 - 0}}} ||{{O_1}{A_{1 - 1}}} |}}),$$
where O1A1−0 · O1A1−1, |O1A1−0| and |O1A1−1| represent the dot product and the lengths of the two vectors O1A1−0 and O1A1−1, respectively. Then, since the rotation axis is perpendicular to the plane spanned by O1A1−0 and O1A1−1, the cross product of these two vectors can be taken along the rotation axis, as given by Eq. (4).
$${O_1}{A_{1 - 0}} \times {O_1}{A_{1 - 1}} = ({y_{1 - 0}}{z_{1 - 1}} - {z_{1 - 0}}{y_{1 - 1}})i + ({z_{1 - 0}}{x_{1 - 1}} - {x_{1 - 0}}{z_{1 - 1}})j + ({x_{1 - 0}}{y_{1 - 1}} - {y_{1 - 0}}{x_{1 - 1}})k$$
Thus, the rotation-axis k(xk, yk, zk) can be given by Eq. (5) as follows.
$$\left( \begin{array}{l} {x_k}\\ {y_k}\\ {z_k} \end{array} \right) = \left( \begin{array}{l} {y_{1 - 0}}{z_{1 - 1}} - {z_{1 - 0}}{y_{1 - 1}}\\ {z_{1 - 0}}{x_{1 - 1}} - {x_{1 - 0}}{z_{1 - 1}}\\ {x_{1 - 0}}{y_{1 - 1}} - {y_{1 - 0}}{x_{1 - 1}} \end{array} \right),$$
Here, the rotational matrix can be derived according to Rodrigues' rotation formula, as defined in Eqs. (6) and (7) [41],
$${R_M} = I + (\sin {\theta _{1 - 1}})K + (1 - \cos {\theta _{1 - 1}}){K^2},$$
$$K = \left[ {\begin{array}{ccc} 0 & -z_k & y_k\\ z_k & 0 & -x_k\\ -y_k & x_k & 0 \end{array}} \right],$$
where K denotes the cross-product matrix of the rotational axis k(xk, yk, zk). The final parameter involved in the 3-D rotational motion compensation is the coordinates of the B-LPH1, which is always located in front of the center of its local blue sphere O1, as seen in Fig. 3(c). The radius of the B-LSH1 is fixed and denoted as R, and the distance between the centers of the B-LPH1 and B-LSH1 is denoted as d. Then, the coordinates of the B-LPH1, denoted as PB-LPH1 (xB-LPH1, yB-LPH1, zB-LPH1), can be calculated from Eq. (8).
$$\left( \begin{array}{l} {x_{B - LPH1}}\\ {y_{B - LPH1}}\\ {z_{B - LPH1}} \end{array} \right) = \left( \begin{array}{l} {x_1}\\ {y_1}\\ {z_1} - R + d \end{array} \right),$$
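The parameter-estimation chain of Eqs. (2)–(7) can be sketched compactly. The illustrative NumPy code below (function names are assumptions) takes the rotation vectors from the sphere center, as done in Section 3.3, and normalizes the axis of Eq. (5), since the matrix form of Eqs. (6)–(7) assumes a unit axis.

```python
import numpy as np

def sphere_center(P):
    """Sphere center through four points P (4x3), per Eq. (2): subtracting the
    first sphere equation from the other three cancels the quadratic terms and
    leaves a 3x3 linear system for the center coordinates."""
    P = np.asarray(P, dtype=float)
    A = 2.0 * (P[1:] - P[0])
    b = np.sum(P[1:] ** 2, axis=1) - np.sum(P[0] ** 2)
    return np.linalg.solve(A, b)          # fails if the four points are degenerate

def rotation_matrix(O, A_prev, A_curr):
    """Rotation angle (Eq. 3), axis (Eqs. 4-5) and Rodrigues matrix (Eqs. 6-7)."""
    v0 = np.asarray(A_prev, float) - O    # vector O1A for the previous frame
    v1 = np.asarray(A_curr, float) - O    # vector O1A for the current frame
    theta = np.arccos(np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1)))
    k = np.cross(v0, v1)
    k = k / np.linalg.norm(k)             # Eqs. (6)-(7) require a unit axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])    # cross-product matrix, Eq. (7)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```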

2.3 3-D rotational motion compensation of the object with the B-LSH

As mentioned above, the 3-D rotational motion compensation of the object between the two consecutive frames can be carried out just by performing the following six sub-processes.

2.3.1 Propagation of the B-OPH1 to the B-LPH1

As seen in Fig. 4, the B-OPH1 can be directly propagated to the B-LPH1, since the plane of the B-LPH1 is parallel to that of the B-OPH1, only at a different depth. As explained in Section 2.2, the coordinates of the B-LPH1 can be calculated from the position of the center point of the local blue spherical arc; thus, the relative displacements along the x, y and z-axes can be directly calculated by subtracting the coordinates of the B-OPH1 from those of the B-LPH1. Therefore, the propagation of the B-OPH1 to the B-LPH1 can be carried out based on the propagation equation of Eq. (9), considering the horizontal, vertical and longitudinal distances between those two wavefronts.

$$U_{B-LPH_1} = {\cal F}^{-1}\left\{ {\cal F}(U_{B-OPH_1}) \cdot {\cal F}\left\{ \frac{e^{i2\pi z_{B-LPH_1}/\lambda}}{i\lambda z_{B-LPH_1}} \exp \left[ i\frac{\pi}{\lambda z_{B-LPH_1}}\left( x_{B-LPH_1}^2 + y_{B-LPH_1}^2 \right) \right] \right\} \right\},$$
where λ and zB-LPH1 denote the wavelength and the z-coordinate of the B-LPH1, UB-LPH1 and UB-OPH1 represent the complex amplitudes of the B-LPH1 and B-OPH1, and ${\cal F}$ and ${{\cal F}^{-1}}$ denote the Fourier and inverse Fourier transforms, respectively.
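A minimal NumPy sketch of this convolution form of Eq. (9) follows (illustrative only; zero-padding and band-limiting safeguards are omitted). A transverse offset between the two holograms can additionally be handled by a simple pixel shift of the result.

```python
import numpy as np

def fresnel_propagate(U, z, wl=532e-9, p=8.1e-6):
    """Convolution form of Eq. (9): IFFT{ FFT(U) * FFT(h) } with the
    Fresnel impulse response h centered at the origin."""
    Ny, Nx = U.shape
    y = (np.arange(Ny) - Ny // 2) * p
    x = (np.arange(Nx) - Nx // 2) * p
    X, Y = np.meshgrid(x, y)
    h = np.exp(1j * 2 * np.pi * z / wl) / (1j * wl * z) \
        * np.exp(1j * np.pi / (wl * z) * (X ** 2 + Y ** 2))
    H = np.fft.fft2(np.fft.ifftshift(h))      # transfer function of the kernel
    return np.fft.ifft2(np.fft.fft2(U) * H)
```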

Fig. 4. Schematic diagram of the propagation process between the B-OPH and B-LPH.

2.3.2 Conversion of the B-LPH1 into the B-LSH1

Figure 5 shows a conceptual diagram of the proposed B-LPH1-to-B-LSH1 conversion process. The B-LSH1 is located just in front of the B-LPH1, and each pixel of the B-LPH1 corresponds to a pixel of the B-LSH1, where the width and height of the B-LPH1 are denoted as w and h, and the pixel pitch and diffraction angle are denoted as p and α, respectively. Then, the maximum transverse extent of the diffraction from a single pixel of the B-LPH1 can be calculated with Eq. (10).

$$W = 2z\tan (\alpha ) = 2z\tan [\arcsin \frac{\lambda }{{2p}}],$$
where the value of W should be equal to 2p in order to guarantee that the diffraction region of each pixel spans the width of only one pixel in the proposed method. Thus, the conversion of the B-LPH1 into the B-LSH1 can be accomplished just by multiplying it with the conversion table of Eq. (11).
$${T_{conv}}(x,y) = \exp (ik{z_{(x,y)}}),$$
where z(x, y), which represents the propagation distance from a pixel of the B-LPH1 to the B-LSH1, is given by Eq. (12).
$${z_{(x,y)}} = d - R + \sqrt {{R^2} - ({{(\frac{w}{2} - x)}^2} + {{(\frac{h}{2} - y)}^2})} ,$$
The maximum distance d between the pixels of the B-LPH1 and its corresponding B-LSH1 can also be calculated with Eq. (13).
$$d = R - \sqrt {{R^2} - ({{(\frac{w}{2})}^2} + {{(\frac{h}{2})}^2})} ,$$
Thus, the conversion between the B-LPH1 and B-LSH1 can be given by Eq. (14).
$${U_{LS{H_1}}}(x,y) = {U_{LP{H_1}}}(x,y) \cdot {T_{conv}}(x,y),$$
where ULSH1 and ULPH1 represent the complex amplitudes of the B-LSH1 and B-LPH1, respectively.
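Eqs. (10)–(14) translate directly into a per-pixel phase table. The sketch below is illustrative only, with all lengths in meters and pixel-center sampling taken as an assumption.

```python
import numpy as np

def conversion_table(Ny, Nx, R, wl=532e-9, p=8.1e-6):
    """Per-pixel LPH-to-LSH phase table of Eqs. (11)-(13)."""
    w, h = Nx * p, Ny * p                                  # hologram width/height
    d = R - np.sqrt(R**2 - ((w / 2)**2 + (h / 2)**2))      # Eq. (13)
    x = (np.arange(Nx) + 0.5) * p                          # pixel-center coordinates
    y = (np.arange(Ny) + 0.5) * p
    X, Y = np.meshgrid(x, y)
    z = d - R + np.sqrt(R**2 - ((w / 2 - X)**2 + (h / 2 - Y)**2))   # Eq. (12)
    return np.exp(1j * (2 * np.pi / wl) * z)               # Eq. (11), k = 2*pi/lambda

# Eq. (14): the spherical hologram is a pointwise product
# U_lsh = U_lph * conversion_table(Ny, Nx, R)
```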

Fig. 5. Conceptual diagram of the proposed B-LPH1-to-B-LSH1 conversion process.

2.3.3 Rotation of the B-LSH1 with the estimated rotational axis and angle

As seen in Fig. 6(a), on the blue spherical arc, the object moves from A1−0 to A1−1 between the 1st and 2nd frames with the rotational axis and angle of k and θ1−1. Just by rotating the B-LSH1 with these estimated rotational axis and angle, the rotated version of the B-LSH1 can be obtained.

Fig. 6. Schematic diagram of the 3-D rotational-motion compensation process.

Then, the rotated B-LSH1 (R-B-LSH1) can be obtained from the B-LSH1 just by multiplying it with the rotational matrix RM, as seen in Eq. (15).

$${U_{R - B - LS{H_1}}} = {U_{B - LS{H_1}}} \cdot R{}_M,$$
where UB-LSH1 and UR-B-LSH1 represent the complex amplitudes of the B-LSH1 and R-B-LSH1, respectively. As seen in Fig. 6(a), the UB-LSH1 is rotated about the rotational axis k by the angle θ1−1 just by being multiplied with RM, from which the UR-B-LSH1 is obtained. Here the R-B-LSH1 is composed of overlapped, non-overlapped and meaningless areas, as seen in Fig. 6(b): the overlapped area is used as the major part of the estimated B-LSH of the 2nd frame, whereas the small non-overlapped part, called the blank area, is to be filled up, and the meaningless area is discarded.

2.3.4 Conversion of the rotated B-LSH1 into its corresponding version of the B-LPH1

Now, the rotated B-LSH1, which is equivalent to the 3-D rotational motion-compensated version of the B-LSH1, is converted back into its rotated version of the B-LPH1, which can be derived from the rotated B-LSH1 just by dividing it by the conversion table of Eq. (11), as seen in Eq. (16). Here, multiplication and division by the conversion table are involved in the LPH-to-LSH and LSH-to-LPH conversions, respectively, as given by Eqs. (17)–(19).

$$U_{R-B-LPH_1} = U_{R-B-LSH_1}/T_{conv},$$
$$\tilde{T}_{conv}(x,y) = \exp (ik(-z_{(x,y)})),$$
$$\tilde{T}_{conv}(x,y) = 1/\exp (ikz_{(x,y)}) = 1/T_{conv}(x,y),$$
$$U_{R-B-LPH_1}(x,y) = U_{R-B-LSH_1}(x,y)\cdot \tilde{T}_{conv}(x,y) = U_{R-B-LSH_1}(x,y)/T_{conv}(x,y),$$
where UR-B-LSH1 and UR-B-LPH1 represent the complex amplitudes of the rotated B-LSH1 and rotated B-LPH1, respectively.
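Since the conversion table of Eq. (11) is a pure phase of unit modulus, the division of Eq. (16) reduces to a multiplication by the complex conjugate, as the short illustrative sketch below shows.

```python
import numpy as np

def lsh_to_lph(U_lsh, T_conv):
    """Inverse of Eq. (14): because |T_conv| = 1, dividing by it (Eq. 16)
    is equivalent to multiplying by its complex conjugate (Eqs. 17-19)."""
    return U_lsh * np.conj(T_conv)
```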

2.3.5 Calculation of the hologram pattern for the blank region of the estimated B-OPH2

As mentioned above, the rotated version of the B-LSH1 consists of overlapped and non-overlapped regions produced by the rotation operation, as seen in Fig. 6(b), which means that the estimated B-LPH2 has a blank region to be filled up together with an overlapped region to be reused. Thus, the hologram pattern for the blank region of the estimated B-LPH2 needs to be calculated with one of the conventional CGH algorithms.

Here, the size of the blank region is a critical factor determining the operational performance of the proposed 3-D rotational-motion compensation method, because the additional CGH calculation time required for the blank region is closely related to its size. In fact, the size of the blank region is determined by the rotational angle and the radius of the B-LSH1. For example, if the object rotates by an angle θ1−1 along the spherical arc path, the B-LSH1 is simultaneously rotated by the same angle, and the length of the arc along which the B-LSH1 is rotated can be calculated with the arc-length formula given by Eq. (20),

$$L = {\theta _{1 - 1}} \cdot \pi \cdot R/180, $$
where L and R represent the length of the arc along which the B-LSH1 is rotated and the radius of the B-LSH1, respectively, with θ1−1 expressed in degrees. Here the width of the blank region becomes approximately equal to L. Thus, as the rotation angle becomes smaller, the size of the blank region decreases correspondingly.
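A direct consequence of Eq. (20) is the blank width expressed in pixels; the helper below is an illustrative sketch (θ1−1 in degrees, R and the pixel pitch p in meters; the function name is an assumption).

```python
import numpy as np

def blank_width_pixels(theta_deg, R, p=8.1e-6):
    """Blank-region width from the arc length of Eq. (20), in pixels."""
    L = np.deg2rad(theta_deg) * R        # identical to theta * pi * R / 180
    return int(np.ceil(L / p))
```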

2.3.6 Propagation of the rotated B-LPH1 to its corresponding version of the B-OPH1

As seen in Fig. 4, the propagation of the rotated B-LPH1 to its corresponding version of the B-OPH1 is the same as the inverse of the B-OPH1-to-B-LPH1 transformation described in Section 2.3.1, and can be described by Eq. (21).

$$U_{R-B-OPH_1} = {\cal F}^{-1}\left\{ {\cal F}(U_{R-B-LPH_1}) \cdot {\cal F}\left\{ \frac{e^{ikz_{R-B-LPH_1}}}{i\lambda z_{R-B-LPH_1}} \exp \left[ i\frac{\pi}{\lambda z_{R-B-LPH_1}}\left( x_{R-B-LPH_1}^2 + y_{R-B-LPH_1}^2 \right) \right] \right\} \right\} = U_{Estimated\,B-OPH_2},$$
Here, it must be noted that the rotated version of the B-OPH1, obtained from the proposed 3-D rotational-motion compensation process between the 1st and 2nd frames, corresponds to the estimated version of the 2nd-frame B-OPH, which is called the estimated B-OPH2 here.

2.4 Error correction process

The estimated motion parameters may not be accurate due to errors in the motion estimation, which causes the compensated hologram not to match the real 2nd-frame hologram perfectly. Thus, the motion errors are evaluated by calculating the difference between the actual 2nd-frame object image and the compensated object image, which is obtained by rotating the 1st-frame object image with the estimated motion parameters. However, the difference between the estimated and actual object images of the 2nd frame is sometimes so small that it can be ignored. For the quantitative evaluation of this difference, the SNR is employed as a metric, as defined in Eq. (22),

$$SNR = 10\log_{10}\left\{ \sum\limits_{x=1}^{M}\sum\limits_{y=1}^{N} \tilde{u}(x,y)^2 \Bigg/ \sum\limits_{x=1}^{M}\sum\limits_{y=1}^{N} \left[ \tilde{u}(x,y) - u_2(x,y) \right]^2 \right\},$$
where M and N denote the width and height of the image, and ũ(x, y) and u2(x, y) represent the pixels of the estimated and actual images of the 2nd frame, respectively. ũ(x, y) can be obtained just by rotating the 1st-frame image u1 with the rotation matrix RM, as given by Eq. (23).
$${\tilde{u} } = {u_1} \cdot {R_M}, $$
Thus, the similarity between the estimated and actual images of the 2nd-frame can be evaluated with the SNR value.
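Eq. (22) and the correction policy of Section 3.5 can be sketched as follows (illustrative only; the small epsilon guards against a zero error term).

```python
import numpy as np

def snr_db(u_est, u_actual):
    """SNR of Eq. (22) between the estimated and actual 2nd-frame images."""
    err = u_est - u_actual
    return 10.0 * np.log10(np.sum(u_est ** 2) / (np.sum(err ** 2) + 1e-12))

# Section 3.5 uses a 30 dB threshold: when snr_db() falls below it, a hologram
# is calculated only for the difference image and added to the estimated OPH2.
```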

3. Experiments and results

3.1 Overall configuration of the experimental setup

Figure 7 shows the overall experimental setup of the proposed system, which is composed of digital and optical processes. In the digital process, as seen in Fig. 7(a), the hologram patterns for the 1st frame of the input 3-D scene are initially generated with three kinds of conventional CGH algorithms, namely the NLUT, WRP and 1-D NLUT methods, and the hologram patterns of the following frames are generated based on the proposed SH-3DRMC method. In addition, for the comparative performance analysis, not only the original NLUT, WRP and 1-D NLUT methods, but also their versions employing the conventional CH-RMC method, are used to generate the holographic video for the input 3-D scene. The code is implemented in MATLAB 2017 on a personal computer with a 3.00 GHz CPU and 64.0 GB of memory.

Fig. 7. Experimental setup of the proposed system composed of digital and optical processes.

In the optical process, as seen in Fig. 7(b), a green laser (G-laser) (Stradus 532, VORTRAN Laser Technology) is used as the light source, which is collimated and expanded by the laser collimator (LC) and beam expander (BE) (Model: HB-4XAR.14, Newport) and then illuminated, through a beam splitter (BS), onto the SLM (spatial light modulator) on which the calculated hologram pattern is loaded. In the experiments, a reflection-type amplitude-modulation SLM (Model: HOLOEYE LC-R-1080) with a resolution of 1920×1200 pixels and a pixel pitch of 8.1 µm is employed, and the off-axis reconstructed image is captured with a CCD camera (Thorlabs, 1280×1024 pixels).

In the experiment, an input 3-D scene with 30 frames of video images, where an 'Airplane' object moves around a fixed 'Sphere' along a curved pathway with a local arc, is employed as the test video scenario and generated with 3DS MAX, as shown in Fig. 8. Every input 3-D image has 256 depth planes, where each depth plane is composed of 1280×720 pixels. In addition, the sampling rates on the x-y plane and along the z-direction are all set to 0.1 mm. As seen in Fig. 8, the test 3-D object of the 'Airplane' is assumed to fly from the location A1−1(6.63 cm, −6.42 cm, −5.46 cm) to the location A1−30(−9.07 cm, 3.65 cm, −5.17 cm) along an arbitrary 3-D moving trajectory in space, where the top, front and left views of the moving trajectory of the object are shown in Figs. 8(a), 8(b) and 8(c), respectively.

Fig. 8. Configuration of the 3-D moving trajectory of the 'Airplane' in the test video scenario with 30 video frames: (a) top view, (b) front view and (c) left view.

3.2 Generation of the 1st-frame OPH

In the experiment, the conventional NLUT, WRP and 1-D NLUT methods are employed to calculate the 1st-frame OPH (OPH1) of the input 3-D scene, where two kinds of hologram patterns corresponding to the 'Sphere' and 'Airplane' objects are calculated separately, and only the hologram pattern for the moving 'Airplane' object is involved in the 3-D rotational-motion compensation process to generate the hologram patterns of the following frames. Here, the resolution and pixel size of the hologram pattern are set to 1920×1200 pixels and 8.1 µm, respectively, which are the same as those of the SLM, and the OPH is located at (0, 0, −40 cm).

In the NLUT-based system, 256 PFPs, one for each depth plane of the input 3-D scene, are stored in advance, where only the PFPs for the center-located object points on each depth plane are pre-calculated, and the fringe patterns for the other object points on the same depth plane are obtained just by shifting those PFPs. Then, the CGH patterns of the 'Airplane' object are calculated just by multiplying each of the object intensities with its corresponding shifted PFP and adding them together.

On the other hand, in the 1-D NLUT-based system, only a pair of half-sized 1-D B-PFP and DC-PFP are pre-calculated and stored based on the concentric-symmetry property of the PFP, and the PFPs of all depth planes are obtained from this pair of half-sized 1-D PFPs based on the thin-lens property, which reduces the required memory size to a few KB regardless of the number of depth planes. Moreover, all CGH calculations in this method are performed fully one-dimensionally based on the shift-invariance property, which also minimizes the overall hologram calculation time. In the WRP method, a two-step operation is required. In the first step, the complex amplitudes of the 3-D object on a wavefront recording plane (WRP) are calculated, where the WRP acts as a virtual plane placed between the object and the hologram plane. In the second step, the CGH pattern is generated just by calculating the diffraction pattern from the WRP plane to the CGH plane based on Fresnel diffraction.

3.3 Estimation of 3-D rotational-motion parameters of the object

Above all, the center of the local sphere on which the moving trajectory of the 3-D object is located is calculated by inserting four points belonging to the trajectory arc into the sphere function set of Eq. (2); this center is denoted as O1. Here, the four points A1−0(x1−0, y1−0, z1−0), A1−1(x1−1, y1−1, z1−1), A1−2(x1−2, y1−2, z1−2) and A1−3(x1−3, y1−3, z1−3), with coordinate values of x1−0=6.63 cm, y1−0=−6.42 cm, z1−0=−5.46 cm, x1−1=6.26 cm, y1−1=−6.10 cm, z1−1=−5.76 cm, x1−2=5.61 cm, y1−2=−5.77 cm, z1−2=−6.03 cm, x1−3=5.09 cm, y1−3=−6.28 cm and z1−3=−5.43 cm, respectively, are used to solve this function set, and the center coordinates of the local sphere are estimated to be O1(0, 0, 12.27 cm). Two other critical parameters required for the 3-D rotational-motion compensation process are the rotational axes and angles between two successive frames, which are used to rotate the LSHs in order to carry out the 3-D rotational-motion compensation.

The rotational angle between two consecutive frames, such as the 1st and 2nd frames, can be estimated just by substituting the two vectors spanning this rotation angle into Eq. (3), where the two vectors O1A1−0 and O1A1−1 for the 1st and 2nd frames are given by (6.63 cm, −6.42 cm, −17.73 cm) and (6.26 cm, −6.10 cm, −18.03 cm), respectively. The rotational angle between the 1st and 2nd frames, denoted as θ1−1, is then calculated to be about 0.03 rad. In addition, the rotational axis can be obtained with Eq. (4). Here, the rotational axis k(xk, yk, zk) must have the same direction as the cross product of O1A1−0 and O1A1−1; thus it can be obtained just by calculating the cross product of those two vectors. The coordinate values of the rotational axis k(xk, yk, zk) are calculated to be (0.60 cm, 0.97 cm, −0.69 cm) from Eq. (5). Then, the rotational matrix for transforming the 3-D image from the 1st frame (A1) to the 2nd frame (A2) can be calculated according to the matrix form of Rodrigues' rotation formula of Eq. (6).
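As a quick, illustrative numerical check (NumPy, units in cm), the angle of Eq. (3) computed from the two vectors quoted above indeed comes out near the reported value of about 0.03 rad:

```python
import numpy as np

v0 = np.array([6.63, -6.42, -17.73])   # O1A1-0, 1st frame
v1 = np.array([6.26, -6.10, -18.03])   # O1A1-1, 2nd frame
cos_t = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
theta = np.arccos(cos_t)               # Eq. (3); roughly 0.03 rad
print(round(theta, 3))
```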

3.4 3-D rotational-motion compensation based on the LSH

As mentioned above, the 3-D rotational-motion compensation can be carried out with a six-step process. First, as seen in Fig. 9(a), the OPH1 is directly propagated to the LPH1 based on Eq. (9), where xB-LPH, yB-LPH and zB-LPH are estimated with the coordinate values of O1 from Eq. (8). For the test video scenario, those coordinate values are calculated to be xB-LPH=x1=0, yB-LPH=y1=0 and zB-LPH=z1−(R−d)=−2.70 cm, where R−d is estimated to be 14.972 cm from Eq. (13) under the condition of R = 15 cm.

Fig. 9. Schematic diagram of the (a) first process of propagating the OPH1 to the LPH1 and (b) second process of transforming the LPH1 into the LSH1.

Second, the LPH1 is transformed into its corresponding LSH1 simply by multiplying it with the conversion table of Eq. (14), where the distances between each pixel of the LSH1 and LPH1 can be calculated with Eq. (12) for the case of R = 15 cm, d = (R−(R−d)) = 15−14.972 = 0.028 cm, w = 1.56 cm and h = 0.97 cm. With those parameters, the conversion table can be obtained, which is shown in Fig. 9(b).

Third, the LSH1 is then rotated with the rotational matrix calculated from the rotational axis and angle, which can be done just by directly shifting the LSH1 along the horizontal and vertical directions by their corresponding displacements; this saves processing time compared with rotating the LSH1 with the rotational matrix, as seen in Fig. 10(a). Here, the horizontal and vertical displacements are estimated to be 0.375 cm (= 0.5/(20/15)) ≈ 463 pixels and 0.24 cm (= 0.32/(20/15)) ≈ 296 pixels, respectively, under the condition that r1 is estimated to be around 20 cm. Furthermore, the rotational operation of the LSH1 may cause a non-overlapped region to appear, but the size of this region can be kept very small in comparison with that of the whole hologram pattern, since the transverse arc of the LSH1 becomes smaller than that of the 3-D object during the rotational operation. Here the rotational angle is calculated to be 0.03 rad and the radius of the LSH1 is set to 15 cm; thus the transverse arc length of the LSH1 is calculated to be about 0.45 cm based on the arc-length formula.
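The shift-based implementation of this third step can be sketched as follows (illustrative only; dx_px and dy_px are the signed pixel displacements estimated above, and the wrapped-in strip is zeroed to mark the blank region):

```python
import numpy as np

def shift_compensate(U_lsh, dx_px, dy_px):
    """Shift-based stand-in for the LSH rotation: roll the hologram by the
    estimated pixel displacements and zero the wrapped-in strip, which
    becomes the blank region to be recalculated directly."""
    U = np.roll(U_lsh, (dy_px, dx_px), axis=(0, 1))
    blank = np.zeros_like(U, dtype=bool)
    if dx_px > 0:   blank[:, :dx_px] = True   # columns wrapped from the far edge
    elif dx_px < 0: blank[:, dx_px:] = True
    if dy_px > 0:   blank[:dy_px, :] = True
    elif dy_px < 0: blank[dy_px:, :] = True
    U[blank] = 0                              # leave the blank region for direct CGH
    return U, blank
```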

Fig. 10. Schematic diagram of the (a) third process for the rotation of the LSH1 to obtain its rotated version and (b) fourth process for the transformation of the rotated LSH1 into its corresponding version of the LPH1.

Fourth, the rotated LSH1 is then transformed back into its corresponding version of the LPH1 with the same process as the second step, except that the multiplication with the conversion table is replaced with a division, as seen in Fig. 10(b).

Fifth, as seen in Fig. 11(a), the hologram pattern for the blank region is calculated directly with the conventional NLUT, WRP and 1-D NLUT methods, considering the relative positions in the horizontal and vertical directions as well as the size of the blank region. The widths of the blank region in the horizontal and vertical directions are estimated to be 0.375 cm and 0.24 cm, which correspond to about 463 and 296 pixels, respectively. Thus, the size of the blank region becomes 48.78% (= (463×1200 + 296×1920)/(1920×1200)) of the whole hologram. Here, it must be noted that the ratio between the blank and whole regions of the hologram may depend on the geometric structure of the whole 3-D rotational model composed of the 3-D object and the LSH1.

Fig. 11. Schematic diagram of the (a) fifth process for the hologram calculation of the blank region and (b) sixth process for propagating the rotated LPH1 to its corresponding OPH1.

Sixth, the rotated LPH1 is propagated back to the OPH, which is the same operation as the first step except that the propagation direction is reversed. As mentioned above, the rotated version of the OPH1, obtained from the proposed rotational-motion compensation process between the 1st and 2nd frames, corresponds to the estimated version of the 2nd-frame OPH (OPH2), as seen in Fig. 11(b).

3.5 Error correction for the compensated hologram pattern

The similarity between the estimated and actual object images is measured in terms of the SNR, where the threshold value is set to 30 dB [42]. Figures 12(a-1), 12(a-2), 12(a-3) and 12(a-4) show the estimated, actual, difference (error) and error-corrected images of the 2nd frame, respectively. The number of differing object points is 531, which is around 16.16% of the 3,286 object points of the actual 2nd-frame image. Here, the SNR value of the 2nd frame is calculated to be 28.26 dB, so an additional error-correction process was carried out. In addition, Figs. 12(b-1), 12(b-2) and 12(b-3) represent the estimated OPH2, the calculated hologram pattern for the error image, and the error-corrected OPH2, respectively.

Fig. 12. (a-1) Estimated image, (a-2) actual image, (a-3) error image and (a-4) error-corrected image of the 2nd frame, and (b-1) estimated OPH2, (b-2) calculated hologram pattern for the error image and (b-3) error-corrected OPH2.

Here, the error correction can be done just by adding the calculated hologram pattern for the error image of Fig. 12(b-2) to the estimated OPH2 of Fig. 12(b-1), which results in the error-corrected OPH2 of Fig. 12(b-3). With this error-correction process, the estimated image of Fig. 12(a-1) obtained with the proposed method can be accurately matched with the actual image of Fig. 12(a-2).

3.6 Comparative performance analysis of the conventional CGH algorithms with and without employing the proposed method

Here, the operational performances of the conventional 1-D NLUT and its two modified versions employing the conventional CH-RMC and proposed SH-3DRMC schemes, called the original, CH-RMC-based and SH-3DRMC-based 1-D NLUT methods, respectively, are comparatively discussed in detail in terms of the average number of calculated object points (ANCOP) and the average calculation time (ACT). Table 1 shows the ANCOPs and ACTs for each of the original, CH-RMC-based and SH-3DRMC-based 1-D NLUT methods.

Table 1. Specification of the detailed time costs for each step of the CH-RMC and SH-3DRMC-based NLUT methods on average for the test video scenario with 30 frames.

As seen in Table 1, the ANCOPs and ACTs of the original, CH-RMC-based and SH-3DRMC-based 1-D NLUT methods have been estimated to be 3261, 5052, 527 and 8.47 s, 15.16 s, 5.79 s, respectively. Here, the ANCOP and ACT of the CH-RMC-based 1-D NLUT are found to be increased by 54.92% and 78.65%, respectively, in comparison with those of the original 1-D NLUT. Since the conventional CH-RMC method was proposed only for 2-D rotational-motion compensation, it may not be effective in dealing with this 3-D rotational-motion compensation problem. In other words, a 3-D rotational motion composed of three rotational motions about the x, y and z-axes cannot be compensated just by rotating a cylindrical hologram around the y-axis alone, which results in increases of the ANCOP and ACT values even though a motion-compensation method is employed. On the other hand, the ANCOP and ACT values of the 1-D NLUT employing the proposed SH-3DRMC method are found to be greatly decreased, by 83.84% and 31.64%, respectively.

Furthermore, Table 1 details the time costs for each of the six processes of the SH-3DRMC-based 1-D NLUT method: parameter estimation (PE), propagation between the OPH and LPH (POL), conversion between the LPH and LSH (CLL), rotation of the LSH (RL), hologram calculation of the blank region (HCB) and error correction (EC). For comparison, the time costs for these six processes of the CH-RMC-based 1-D NLUT method are also included in Table 1.

As seen in Table 1, the calculation times taken for the parameter-estimation and LSH-rotation processes of the CH-RMC and SH-3DRMC-based 1-D NLUT methods are calculated to be 43 ms, 48 ms and 9 ms, 11 ms, respectively. In the CH-RMC-based method, two-dimensional parameter-extraction and rotational processes are carried out, whereas three-dimensional versions of these processes are performed in the SH-3DRMC-based method, which results in slight differences in time cost between them. On the other hand, the calculation times for the OPH-LPH propagation and LPH-LSH conversion processes are equally calculated to be 130 ms and 12 ms, respectively, in both the CH-RMC and SH-3DRMC-based 1-D NLUT methods, since these two operations are carried out identically in both methods.

In fact, the calculation times of the four processes of parameter estimation, OPH-LPH propagation, LPH-LSH conversion and LSH rotation can be considered trivial in both methods in comparison with those of the other two processes of hologram calculation of the blank region and error correction, as seen in Table 1. The calculation times of the blank-region hologram calculation and error-correction processes of the CH-RMC and SH-3DRMC-based 1-D NLUT methods are found to be 1.89 s, 12.94 s and 3.90 s, 1.55 s, respectively, which correspond to 12.47%, 85.33% and 67.90%, 26.82% of the total time costs, respectively.

For the case of the hologram calculation of the blank region, only the blank regions due to the horizontal shifts of the LCHs are calculated in the CH-RMC-based method, whereas in the SH-3DRMC-based method the blank regions caused by both the horizontal and vertical shifts of the LSHs are calculated, which results in some difference in their time costs. Moreover, as seen in Table 1, the ANCOP of the CH-RMC-based method is increased up to 154.92%, but decreased down to 16.16% in the SH-3DRMC-based method, in comparison with those of their original methods, which leads to differences in their time costs, since the time taken for the error-correction process depends on the number of calculated object points.

As mentioned above, Table 1 shows that the ACT of the CH-RMC-based 1-D NLUT is increased up to 178.65%, whereas the ACT of the SH-3DRMC-based 1-D NLUT is decreased down to 68.36%, in comparison with that of the original 1-D NLUT. These results reveal that the computational speed of the conventional 1-D NLUT method can be greatly enhanced when the proposed SH-3DRMC scheme is employed. In addition, it is noted again that this proposed scheme can be applied to any conventional CGH algorithm to enhance its computational speed.

Table 2 also shows the additional experimental results for the other two methods, the NLUT and WRP, employing the conventional CH-RMC and proposed SH-3DRMC methods. As seen in Table 2, the ACTs of the original NLUT and WRP methods are compared with those of their CH-RMC and SH-3DRMC-based versions. Just as in the case of the 1-D NLUT method, the ACTs of the CH-RMC-based NLUT and WRP methods are increased up to 179.57% and 181.98%, whereas those of the SH-3DRMC-based NLUT and WRP methods are greatly decreased down to 65.25% and 58.63%, respectively, in comparison with those of their original versions.

Table 2. Comparison of the computational speeds of the original NLUT, WRP and 1-D NLUT methods and their CH-RMC and SH-3DRMC-based versions for the test video scenario with 30 frames.

In other words, the average calculation time of the three conventional NLUT, WRP and 1-D NLUT methods employing the proposed SH-3DRMC is found to be reduced to 64.08% of that of their original versions, which corresponds to an average reduction of 35.92% in calculation time. These experimental results confirm the applicability of the proposed method to any conventional CGH algorithm and its feasibility in practical applications.

3.7 Reconstruction of the holographic 3-D video

Figures 13(a1)–13(a4) and 13(b1)–13(b4) show four 3-D scene images of the 1st, 11th, 21st and 30th frames, which are computationally and optically reconstructed from the holographic videos generated with the SH-3DRMC-based 1-D NLUT method. The fixed 'Sphere' images are reconstructed at a depth of 323 mm, while the 'Airplane' images are reconstructed at depth planes of 348 mm, 325 mm, 328 mm and 333 mm measured from the hologram plane for the 1st, 11th, 21st and 30th frames of the test video scenario, respectively. All 30 frames of the computationally and optically reconstructed 3-D scenes, compressed into the video files of Visualization 1 and Visualization 2, respectively, are also included in Fig. 13.

Fig. 13. Computationally and optically reconstructed input 3-D scene images from the holographic video generated with the SH-3DRMC-based 1-D NLUT method for the test video scenario (Visualization 1, Visualization 2): (a1)–(a4) computationally reconstructed input 3-D scene images and (b1)–(b4) optically reconstructed input 3-D scene images of the 1st, 11th, 21st and 30th frames, respectively.

As seen in the optically reconstructed input scenes, the 'Airplane' image of the 21st frame in Fig. 13(b3) looks more blurred than those of the 11th and 30th frames in Figs. 13(b2) and 13(b4), which results from the fact that the 3-D scenes have been optically reconstructed with the focus on the fixed 'Sphere' image. That is, the depth distance between the 'Airplane' and 'Sphere' objects at the 21st frame is a little larger than that at the 11th frame, which might cause the 'Airplane' image of the 21st frame to be slightly more out of focus.

The successful reconstruction of the holographic videos for the test video scenario finally confirms the feasibility of the proposed method. However, in practice, the proposed method may not be applicable to the special case where an object moves along a trajectory with a continuously-varying curvature, since such a moving trajectory cannot be properly segmented into a set of local arcs.

4. Conclusions

In this paper, a new SH-3DRMC scheme has been proposed for the accelerated generation of holographic videos of a 3-D object freely moving in space along many curvatures. All the rotational motions of the object made on each arc could be compensated with the proposed method, which results in a great reduction of the computational complexity of any conventional CGH algorithm. Experiments with a test scenario show that the computational speeds of the three conventional NLUT, WRP and 1-D NLUT methods employing the SH-3DRMC scheme are enhanced by 35.92% on average in comparison with those of their original versions, which finally confirms the feasibility of the proposed method.

Funding

MSIT (Ministry of Science and ICT), Korea, under the ITRC support program (IITP-2017-01629) supervised by the IITP, Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (2018R1A6A1A03025242).

Disclosures

The authors declare no conflicts of interest.

References

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]  

2. C. J. Kuo and M. H. Tsai, Three-Dimensional Holographic Imaging (John Wiley & Sons, 2002).

3. T.-C. Poon, Digital Holography and Three-dimensional Display (Springer Verlag, 2007).

4. X. Xu, Y. Pan, P. P. M. Y. Lwin, and X. Liang, “3D holographic display and its data transmission requirement,” in Proceedings of IEEE Conference on Information Photonics and Optical Communications (IEEE, 2011), pp. 1–4.

5. F. Yaras, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. 48(34), H48–H53 (2009). [CrossRef]  

6. HoloEye Photonics AG, "GAEA 10 Megapixel Phase Only Spatial Light Modulator (Reflective)," http://holoeye.com/spatial-light-modulators/gaea-4k-phase-only-spatial-light-modulator/.

7. M. Makowski, I. Ducin, K. Kakarenko, A. Kolodziejczyk, A. Siemion, A. Siemion, J. Suszek, M. Sypek, and D. Wojnowski, "Efficient image projection by Fourier electroholography," Opt. Lett. 36(16), 3018–3020 (2011). [CrossRef]

8. H. Nakayama, N. Takada, Y. Ichihashi, S. Awazu, T. Shimobaba, N. Masuda, and T. Ito, “Real-time color electroholography using multiple graphics processing units and multiple high-definition liquid-crystal display panels,” Appl. Opt. 49(31), 5993–5996 (2010). [CrossRef]  

9. K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express 19(10), 9086–9101 (2011). [CrossRef]  

10. T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, “Realistic expression for full-parallax computer-generated holograms with the ray-tracing method,” Appl. Opt. 52(1), A201–A209 (2013). [CrossRef]  

11. T. Ichikawa, T. Yoneyama, and Y. Sakamoto, “CGH calculation with the ray tracing method for the Fourier transform optical system,” Opt. Express 21(26), 32019–32031 (2013). [CrossRef]  

12. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). [CrossRef]  

13. T. Shimobaba, N. Masuda, and T. Ito, "Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane," Opt. Lett. 34(20), 3133–3135 (2009). [CrossRef]

14. D. Arai, T. Shimobaba, K. Murano, Y. Endo, R. Hirayama, D. Hiyama, T. Kakue, and T. Ito, “Acceleration of computer-generated holograms using tilted wavefront recording plane method,” Opt. Express 23(2), 1740–1747 (2015). [CrossRef]  

15. T. Tommasi and B. Bianco, “Frequency analysis of light diffraction between rotated planes,” Opt. Lett. 17(8), 556–558 (1992). [CrossRef]  

16. D. Im, J. Cho, J. Hahn, B. Lee, and H. Kim, “Accelerated synthesis algorithm of polygon computer-generated holograms,” Opt. Express 23(3), 2863–2871 (2015). [CrossRef]  

17. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20(9), 1755–1762 (2003). [CrossRef]  

18. Y. Pan, X. Xu, S. Solanki, X. Liang, R. Tanjung, C. Tan, and T. Chong, “Fast CGH computation using S-LUT on GPU,” Opt. Express 17(21), 18543–18555 (2009). [CrossRef]  

19. H. Yoshikawa, T. Yamaguchi, and R. Kitayama, "Real-time generation of full color image hologram with compact distance look-up table," in Digital Holography and Three-Dimensional Imaging, 2009 OSA Technical Digest Series (Optical Society of America, 2009), paper DWC4.

20. H. Yoshikawa, S. Iwase, and T. Oneda, “Fast computation of Fresnel holograms employing difference,” Proc. SPIE 3956, 48–55 (2000). [CrossRef]  

21. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express 21(7), 9192–9197 (2013). [CrossRef]  

22. Y. Kimura, R. Kawaguchi, T. Sugie, T. Kakue, T. Shimobaba, and T. Ito, "Circuit design of special-purpose computer for holography HORN-8 using eight Virtex-5 FPGAs," in Proc. 3D Syst. Appl., S3–2 (2015).

23. T. Sugie, T. Akamatsu, T. Nishitsuji, R. Hirayama, N. Masuda, H. Nakayama, Y. Ichihashi, A. Shiraki, M. Oikawa, N. Takada, and Y. Endo, “High-performance parallel computing for next-generation holographic imaging,” Nat. Electron. 1(4), 254–259 (2018). [CrossRef]  

24. H. G. Kim and Y. M. Ro, “Ultrafast layer based computer-generated hologram calculation with sparse template holographic fringe pattern for 3-D object,” Opt. Express 25(24), 30418–30427 (2017). [CrossRef]  

25. T. Shimobaba and T. Ito, “Fast generation of computer-generated holograms using wavelet shrinkage,” Opt. Express 25(1), 77–87 (2017). [CrossRef]  

26. D. Blinder and P. Schelkens, “Accelerated computer generated holography using sparse bases in the STFT domain,” Opt. Express 26(2), 1461–1473 (2018). [CrossRef]  

27. P. W. M. Tsang and T. C. Poon, “Fast generation of digital holograms based on warping of the wavefront recording plane,” Opt. Express 23(6), 7667–7673 (2015). [CrossRef]  

28. Z. Zeng, H. Zheng, Y. Yu, and A. K. Asundi, “Off-axis phase-only holograms of 3D objects using accelerated point-based Fresnel diffraction algorithm,” Opt. Lasers. Eng. 93, 47–54 (2017). [CrossRef]  

29. S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008). [CrossRef]  

30. Y. Zhao, K. C. Kwon, M. U. Erdenebat, M. S. Islam, S. H. Jeon, and N. Kim, "Quality enhancement and GPU acceleration for a full-color holographic system using a relocated point cloud gridding method," Appl. Opt. 57(15), 4253–4262 (2018). [CrossRef]

31. M.-W. Kwon, S.-C. Kim, S.-E. Yoon, Y.-S. Ho, and E.-S. Kim, “Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes,” Opt. Express 23(3), 2101–2120 (2015).

32. M.-W. Kwon, S.-C. Kim, and E.-S. Kim, “Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes,” Appl. Opt. 55(3), A22–A31 (2016).

33. S.-C. Kim, J.-H. Yoon, and E.-S. Kim, “Fast generation of three-dimensional video holograms by combined use of data compression and lookup table techniques,” Appl. Opt. 47(32), 5986–5995 (2008).

34. S.-C. Kim, X.-B. Dong, M.-W. Kwon, and E.-S. Kim, “Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table,” Opt. Express 21(9), 11568–11584 (2013).

35. X.-B. Dong, S.-C. Kim, and E.-S. Kim, “MPEG-based novel-look-up-table method for accelerated computation of digital video holograms of three-dimensional objects in motion,” Opt. Express 22(7), 8047–8067 (2014).

36. X.-B. Dong, S.-C. Kim, and E.-S. Kim, “Three-directional motion compensation-based novel-look-up-table for video hologram generation of three-dimensional objects freely maneuvering in space,” Opt. Express 22(14), 16925–16944 (2014).

37. H. K. Cao and E. S. Kim, “Full-scale one-dimensional NLUT method for accelerated generation of holographic videos with the least memory capacity,” Opt. Express 27(9), 12673–12691 (2019).

38. K. Matsushima, “Formulation of the rotational transformation of wave fields and their application to digital holography,” Appl. Opt. 47(19), D110–D116 (2008).

39. D. Blinder, C. Schretter, and P. Schelkens, “Global motion compensation for compressing holographic videos,” Opt. Express 26(20), 25524–25533 (2018).

40. H. K. Cao, S. F. Lin, and E. S. Kim, “Accelerated generation of holographic videos of 3-D objects in rotational motion using a curved hologram-based rotational-motion compensation method,” Opt. Express 26(16), 21279–21300 (2018).

41. Wikipedia, “Rodrigues’ rotation formula,” https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula.

42. ISO 12232, “Photography - Electronic still picture cameras: Determination of ISO speed,” International Organization for Standardization, Geneva, Switzerland (1997).

Supplementary Material (2)

Visualization 1: Computationally reconstructed input 3-D scene images from the holographic video generated with the SH-3DRMC-based 1-D NLUT method for the test video scenario.
Visualization 2: Optically reconstructed input 3-D scene images from the holographic video generated with the SH-3DRMC-based 1-D NLUT method for the test video scenario.



Figures (13)

Fig. 1. Overall functional block-diagram of the proposed method.
Fig. 2. Operational process of the proposed method for a 3-D object moving in space along the rotational pathway with three locally-different curvatures.
Fig. 3. Schematic diagram for extracting the rotational motion parameters of the object moving along the blue arc in space for its 3-D rotational motion compensation.
Fig. 4. Schematic diagram of the propagation process between the B-OPH and B-LPH.
Fig. 5. Conceptual diagram of the proposed B-LPH1-to-B-LSH1 conversion process.
Fig. 6. Schematic diagram of the 3-D rotational-motion compensation process.
Fig. 7. Experimental setup of the proposed system composed of digital and optical processes.
Fig. 8. Configuration of the 3-D moving trajectory of the ‘Airplane’ in the test video scenario with 30 video frames: (a) Top view, (b) Front view and (c) Left view.
Fig. 9. Schematic diagram of the (a) 1st process of propagating the OPH1 to the LPH1 and (b) 2nd process of transforming the LPH1 into the LSH1.
Fig. 10. Schematic diagram of the (a) 3rd process of rotating the LSH1 to obtain its rotated version and (b) 4th process of transforming the rotated LSH1 into its corresponding version of the LPH1.
Fig. 11. Schematic diagram of the (a) 5th process of propagating the rotated LPH1 to its corresponding OPH1 and (b) 6th process of calculating the hologram of the blank region.
Fig. 12. (a-1) Estimated image, (a-2) Actual image, (a-3) Error image and (a-4) Error-corrected image of the 2nd frame, and (b-1) Estimated OPH2, (b-2) Calculated hologram pattern for the error image and (b-3) Error-corrected OPH2.
Fig. 13. Computationally and optically reconstructed input 3-D scene images from the holographic video generated with the SH-3DRMC-based 1-D NLUT method for the test video scenario (Visualization 1, Visualization 2): (a1)–(a4) Computationally reconstructed input 3-D scene images, and (b1)–(b4) Optically reconstructed input 3-D scene images of the 1st, 11th, 21st and 30th frames, respectively.

Tables (2)

Table 1. Specification of the detailed time costs for each step of the CH-RMC and SH-3DRMC-based NLUT methods, averaged over the test video scenario with 30 frames.

Table 2. Comparison of the computational speeds of the original NLUT, WRP and 1-D NLUT methods and of their CH-RMC and SH-3DRMC-based versions for the test video scenario with 30 frames.

Equations (23)

Equations on this page are rendered with MathJax.

$$U(x,y)=\frac{\exp\left(i\frac{2\pi}{\lambda}z\right)}{i\lambda z}\iint u(x_o,y_o)\exp\left[\frac{i\pi}{\lambda z}\left((x-x_o)^2+(y-y_o)^2\right)\right]dx_o\,dy_o,$$
$$\begin{cases}(x_1^0-x_1)^2+(y_1^0-y_1)^2+(z_1^0-z_1)^2=r_1^2\\(x_1^1-x_1)^2+(y_1^1-y_1)^2+(z_1^1-z_1)^2=r_1^2\\(x_1^2-x_1)^2+(y_1^2-y_1)^2+(z_1^2-z_1)^2=r_1^2\\(x_2^0-x_1)^2+(y_2^0-y_1)^2+(z_2^0-z_1)^2=r_1^2\end{cases},$$
$$\theta_1^1=\arccos\left(\frac{\overrightarrow{O_1A_1^0}\cdot\overrightarrow{O_1A_1^1}}{\left|\overrightarrow{O_1A_1^0}\right|\left|\overrightarrow{O_1A_1^1}\right|}\right),$$
$$\overrightarrow{O_1A_1^0}\times\overrightarrow{O_1A_1^1}=(y_1^0z_1^1-z_1^0y_1^1)\,\mathbf{i}+(z_1^0x_1^1-x_1^0z_1^1)\,\mathbf{j}+(x_1^0y_1^1-y_1^0x_1^1)\,\mathbf{k},$$
$$\begin{pmatrix}x_k\\y_k\\z_k\end{pmatrix}=\begin{pmatrix}y_1^0z_1^1-z_1^0y_1^1\\z_1^0x_1^1-x_1^0z_1^1\\x_1^0y_1^1-y_1^0x_1^1\end{pmatrix},$$
$$R_M=I+(\sin\theta_1^1)K+(1-\cos\theta_1^1)K^2,$$
$$K=\begin{bmatrix}0&-z_k&y_k\\z_k&0&-x_k\\-y_k&x_k&0\end{bmatrix},$$
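
To make the axis-angle extraction above concrete, here is a minimal NumPy sketch (the function and variable names are hypothetical, not from the paper) that builds the Rodrigues rotation matrix from two object positions and the estimated sphere center; note that the cross-product axis must be normalized to a unit vector before forming the skew-symmetric matrix K:

```python
import numpy as np

def rodrigues_rotation_matrix(p0, p1, center):
    """Rotation matrix carrying position p0 to p1 about the sphere center,
    via Rodrigues' formula R_M = I + sin(t)*K + (1 - cos(t))*K^2."""
    v0 = np.asarray(p0, dtype=float) - np.asarray(center, dtype=float)  # O1->A1^0
    v1 = np.asarray(p1, dtype=float) - np.asarray(center, dtype=float)  # O1->A1^1
    cos_t = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))  # rotation angle theta_1^1
    axis = np.cross(v0, v1)                       # (x_k, y_k, z_k)
    axis = axis / np.linalg.norm(axis)            # Rodrigues needs a unit axis
    xk, yk, zk = axis
    K = np.array([[0.0, -zk,  yk],
                  [ zk, 0.0, -xk],
                  [-yk,  xk, 0.0]])               # skew-symmetric matrix K
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```

For instance, `rodrigues_rotation_matrix((1, 0, 0), (0, 1, 0), (0, 0, 0))` returns the 90° rotation about the z-axis, which indeed maps the first position onto the second.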
$$\begin{pmatrix}x_{B\text{-}LPH1}\\y_{B\text{-}LPH1}\\z_{B\text{-}LPH1}\end{pmatrix}=\begin{pmatrix}x_1\\y_1\\z_1-R+d\end{pmatrix},$$
$$U_{B\text{-}LPH1}=\mathcal{F}^{-1}\left\{\mathcal{F}(U_{B\text{-}OPH1})\,\mathcal{F}\left\{\frac{e^{i2\pi z_{B\text{-}LPH1}/\lambda}}{i\lambda z_{B\text{-}LPH1}}\exp\left[\frac{i\pi}{\lambda z_{B\text{-}LPH1}}\left(x_{B\text{-}LPH1}^2+y_{B\text{-}LPH1}^2\right)\right]\right\}\right\},$$
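
The FFT-based propagation above (B-OPH to B-LPH) follows the convolution form of Fresnel diffraction. A minimal sketch under assumed uniform sampling, with a hypothetical function name and without the band-limiting that a careful implementation would add:

```python
import numpy as np

def fresnel_propagate(u0, wavelength, z, pitch):
    """Fresnel propagation in the convolution form
    U = IFFT{ FFT(u0) * FFT(h) }, with h the Fresnel impulse response."""
    ny, nx = u0.shape
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    X, Y = np.meshgrid(x, y)
    h = (np.exp(2j * np.pi * z / wavelength) / (1j * wavelength * z)
         * np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2)))
    H = np.fft.fft2(np.fft.ifftshift(h)) * pitch**2  # discrete transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```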
$$W=2z\tan(\alpha)=2z\tan\left[\arcsin\frac{\lambda}{2p}\right],$$
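
As an illustration with assumed values (wavelength λ = 532 nm, pixel pitch p = 8 μm), the maximum diffraction angle is α = arcsin(λ/2p) ≈ 1.9°, so at a distance z = 100 mm the width of the region covered by the diffracted light is W = 2 × 100 mm × tan(1.9°) ≈ 6.7 mm.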
$$T_{conv}(x,y)=\exp\left(ikz(x,y)\right),$$
$$z(x,y)=d-R+\sqrt{R^2-\left(\left(\frac{w}{2}-x\right)^2+\left(\frac{h}{2}-y\right)^2\right)},$$
$$d=R-\sqrt{R^2-\left(\left(\frac{w}{2}\right)^2+\left(\frac{h}{2}\right)^2\right)},$$
$$U_{LSH1}(x,y)=U_{LPH1}(x,y)\,T_{conv}(x,y),$$
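
The three equations above define the planar-to-spherical hologram conversion as a pointwise phase multiplication. A minimal sketch with hypothetical names, assuming the hologram extents w and h follow from the sampling, and R at least half the hologram diagonal so the square roots stay real:

```python
import numpy as np

def plane_to_sphere(u_lph, R, pitch, wavelength):
    """Convert a local planar hologram (LPH) into its local spherical
    hologram (LSH) by multiplying the conversion phase exp(ik z(x,y))."""
    ny, nx = u_lph.shape
    w, h = nx * pitch, ny * pitch                        # hologram extents
    X, Y = np.meshgrid(np.arange(nx) * pitch, np.arange(ny) * pitch)
    d = R - np.sqrt(R**2 - ((w / 2)**2 + (h / 2)**2))    # center-to-corner sag
    z = d - R + np.sqrt(R**2 - ((w / 2 - X)**2 + (h / 2 - Y)**2))
    t_conv = np.exp(1j * (2 * np.pi / wavelength) * z)   # T_conv(x, y)
    return u_lph * t_conv
```

Dividing by the same factor (equivalently, multiplying by its reciprocal phase) performs the inverse spherical-to-planar conversion used after the rotation, as the subsequent equations show.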
$$U_{R\text{-}B\text{-}LSH1}=U_{B\text{-}LSH1}\,R_M,$$
$$U_{R\text{-}B\text{-}LPH1}=U_{R\text{-}B\text{-}LSH1}/T_{CONV},$$
$$T'_{CONV}(x,y)=\exp\left(ik\left(-z(x,y)\right)\right),$$
$$T'_{CONV}(x,y)=1/\exp\left(ikz(x,y)\right)=1/T_{CONV}(x,y),$$
$$U_{R\text{-}B\text{-}LPH1}(x,y)=U_{R\text{-}B\text{-}LSH1}(x,y)\,T'_{CONV}(x,y)=U_{R\text{-}B\text{-}LSH1}(x,y)/T_{CONV}(x,y),$$
$$L=\frac{\theta_1^1\pi R}{180},$$
$$U_{R\text{-}B\text{-}OPH1}=\mathcal{F}^{-1}\left\{\mathcal{F}(U_{R\text{-}B\text{-}LPH1})\,\mathcal{F}\left\{\frac{e^{ikz_{R\text{-}B\text{-}LPH1}}}{i\lambda z_{R\text{-}B\text{-}LPH1}}\exp\left[\frac{i\pi}{\lambda z_{R\text{-}B\text{-}LPH1}}\left(x_{R\text{-}B\text{-}LPH1}^2+y_{R\text{-}B\text{-}LPH1}^2\right)\right]\right\}\right\}=U_{B\text{-}OPH2}^{Estimated},$$
$$SNR=10\log_{10}\left\{\frac{\sum_{x=1}^{M}\sum_{y=1}^{N}\left|\tilde{u}(x,y)\right|^2}{\sum_{x=1}^{M}\sum_{y=1}^{N}\left|\tilde{u}(x,y)-u_2(x,y)\right|^2}\right\},$$
$$\tilde{u}=u_1R_M,$$
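
Finally, the SNR metric above compares the rotation-compensated hologram ũ with the directly calculated hologram u₂ of the next frame. A minimal sketch, where `compensate` is a hypothetical callable standing in for the whole compensation pipeline:

```python
import numpy as np

def hologram_snr(u_tilde, u2):
    """SNR (in dB) between the compensated hologram u_tilde and the
    directly calculated reference hologram u2 of the next frame."""
    signal = np.sum(np.abs(u_tilde)**2)
    noise = np.sum(np.abs(u_tilde - u2)**2)
    return 10.0 * np.log10(signal / noise)

# e.g., u_tilde = compensate(u1)  # hypothetical rotation compensation of frame 1
# print(hologram_snr(u_tilde, u2))
```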