## Abstract

A spherical hologram-based three-dimensional rotational-motion compensation (SH-3DRMC) method is proposed for the accelerated generation of holographic videos of a three-dimensional (3-D) object moving in space along an arbitrary trajectory with many locally different curvatures. All the 3-D rotational motions of the object made on each arc can be compensated just by rotating their local spherical holograms along the spherical surfaces matched with the object’s moving trajectory, using the estimated rotation-axes and angles, which enables a massive reduction of the computational complexity of the conventional hologram-generation algorithms and results in an accelerated calculation of holographic videos. Experiments with a test video show that the average calculation times of the conventional NLUT, WRP and 1-D NLUT methods employing the proposed SH-3DRMC scheme are noticeably reduced by 34.75%, 41.37% and 31.64%, respectively, in comparison with those of their original methods. These experimental results confirm the feasibility of the proposed system.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Thus far, the electro-holographic display based on computer-generated holograms (CGHs) has suffered from a couple of critical problems in its practical application [1–5]. One of them is the unavailability of large-scale, high-resolution spatial light modulators (SLMs) to reconstruct the holographic data into 3-D videos, because the hologram resolution is on the order of the light wavelength [6]. The other is the computational complexity of generating the holographic videos in real time [7,8]. Thus, much research on electro-holographic displays has focused on the development of fast CGH algorithms [9–34].

For this, many kinds of CGH algorithms have been proposed, which include the classical ray-tracing (RT) method [9–11], look-up table (LUT) [12], novel look-up table (NLUT) [29], wave-front recording plane (WRP)-based [13–15], polygon-based [16,17], SLUT [18], image hologram-based [19], recurrence relation-based [20], double-step Fresnel diffraction (DSF)-based [21], FPGA-based [22,23], sparse-based [24–26], warping-based [27], accelerated point-based Fresnel diffraction [28], and GPU-based [30–32] methods.

Among them, the NLUT was presented as one of the accelerated CGH algorithms [29]. In this method, a 3-D object is approximated as a set of discretely-sliced image planes with their own depths, and only the fringe patterns of the center-located object points on each image plane, which are called principal-fringe-patterns (PFPs), are pre-calculated and stored. Fringe patterns for the other object points on each image plane are then generated just by shifting and adding of their corresponding PFPs without any additional calculation based on a unique shift-invariant property of the PFP.
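The shift-and-add principle of the NLUT can be illustrated with a short numerical sketch. The Python below is our own simplification, not the paper's implementation: the function and array names are illustrative, and integer-pixel `np.roll` shifting stands in for the actual fringe-pattern translation.

```python
import numpy as np

def nlut_cgh(points, pfps, hologram_shape):
    """Sketch of NLUT-style CGH synthesis: the fringe pattern of every object
    point is a shifted copy of the principal fringe pattern (PFP) pre-computed
    for the center-located point of its depth plane (shift-invariance)."""
    H = np.zeros(hologram_shape, dtype=np.complex128)
    cy, cx = hologram_shape[0] // 2, hologram_shape[1] // 2
    for (px, py, depth_idx, intensity) in points:
        # Shift-invariance: translate the stored PFP of this depth plane by the
        # point's lateral offset from the plane center, scale by its intensity.
        shifted = np.roll(pfps[depth_idx], shift=(py - cy, px - cx), axis=(0, 1))
        H += intensity * shifted
    return H
```

In practice the shift is applied over a PFP larger than the hologram window so that no fringe information wraps around; `np.roll` is used here only to keep the sketch short.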

Here it must be noted that the NLUT usually generates the CGH patterns of the 3-D object in motion based on a two-step process consisting of pre- and main-processing [31–36]. In the pre-processing, the number of object points to be calculated is minimized just by removing as much redundant object data between consecutive 3-D video frames as possible, by employing various motion estimation and compensation algorithms used in conventional digital communication systems [4,19]. This data compression operation of the NLUT method can be carried out based on the shift-invariance property of the PFP. In the following main-processing, CGH patterns only for the compressed object data are calculated with repetitive shifting and adding operations of their corresponding PFPs [31,33]. Thus, the computational speed of the NLUT method can be enhanced not only by reducing the number of calculated object points in the pre-processing, but also by shortening the CGH calculation time in the main-processing.

There are several types of NLUTs employing this pre-processing for eliminating the temporal redundancy between two consecutive video frames, which include the temporal redundancy-based NLUT (TR-NLUT) [33], motion compensation-based NLUT (MC-NLUT) [32], MPEG-based NLUT (MPEG-NLUT) [35] and three-directional motion compensation-based NLUT (3D-MC-NLUT) [36] methods, as well as the full-scale one-dimensional NLUT (1-D NLUT) method enabling the faster generation of holographic videos with the minimum memory capacity [37]. It is certain that the computational speed of these NLUT methods has been greatly enhanced, but they still have operational limitations in their practical applications. As mentioned above, all those NLUT methods employing the motion estimation and compensation process operate based on the shift-invariance property of the PFP [34–36]. This property, however, allows only the *x*, *y* and *z*-directional motion vectors to be estimated and compensated. Thus, these methods are effective mainly in situations where 3-D objects move linearly with small depth variations [34–36].

However, in the real world, 3-D objects usually move in random motions with many different curvatures on the ground or in space, which cannot be compensated with the traditional NLUT-based motion compensation methods. For dealing with rotational motion, a simple rotational transformation technique was formulated [38]. The rotation of a 3-D object in space always brings new information into the following frames due to the perspective changes of the object, but those data happen to be lost in the rotational transformation technique. In addition, a global motion compensation method was also presented for the compression of holographic videos [39].

But, as a feasible approach for the fast generation of holographic videos of a 3-D object randomly moving with many curvatures, the curved hologram-based rotational motion compensation (CH-RMC) method, based on a concept of rotation-invariance of the curved hologram, was proposed [40,41]. In this method, rotational motions of the 3-D object made on every arc can be compensated just by rotating their local curved holograms on the curved surfaces matched with the trajectory on which the object moves. Thus, with this CH-RMC algorithm, most hologram patterns of the 3-D object in rotational motion can be generated without their direct calculation, which results in a dramatic reduction of the overall calculation time of the holographic video. Nevertheless, this CH-RMC method can only be applied to a 3-D object moving on the ground, since the local curved holograms with cylindrical forms must be aligned perpendicular to the ground, and additional complex operations are required for the 3-D rotation case, which increase its time cost and limit the efficiency of its rotational compensation. This means that only two-dimensional (2-D) rotational motions on the ground can be properly compensated with this method.

Thus, in this paper, as a new approach for the accelerated generation of holographic videos of a 3-D object in rotational motion in space, the spherical hologram-based 3-D rotational-motion compensation (SH-3DRMC) scheme is proposed. In this method, spherical forms of the local holograms are employed for compensating the object’s rotational motions on each curvature. Here, motion compensation can be carried out just by rotating the local spherical holograms along the spherical surfaces matched with the moving trajectory of the object, using the estimated rotation-axes and angles between two successive frames. Thus, most hologram patterns of the previous frames can be reused to generate those of the current frames, which results in a significant reduction of the overall CGH calculation time for the 3-D object moving in space. It must be noted here that this proposed scheme can be applied to any conventional CGH algorithm to enhance its computational speed, which is its most important feature.

To confirm the feasibility of the proposed SH-3DRMC method, the operational principle of the proposed method is analyzed based on wave-optics, and experiments with a test 3-D object in rotational-motion in space are carried out with three conventional CGH algorithms of NLUT, WRP and 1-D NLUT. Operational performances of those NLUT, WRP and 1-D NLUT employing the SH-3DRMC scheme are then discussed in terms of the computational speed in comparison with those of their original versions.

## 2. Proposed method

Figure 1 shows an overall functional diagram of the proposed SH-3DRMC method, which is composed of a four-step process. Here, a 3-D object of the ‘*Airplane*’ is assumed to move along the curved pathway with three locally-different arcs in space, where coordinates of the rotational axes and angles in each local arc can be estimated.

Initially, just by applying a segmentation strategy to the randomly-moving trajectory of the 3-D object in space, the object’s trajectory can be divided into a set of local spherical arcs with different radii, and then each spherical arc can be compensated just by using its local spherical hologram (LSH) that is transformed from the local planar hologram (LPH), where the LPHs can be obtained just by propagating the original planar holograms (OPHs). As seen in Fig. 2, the object flies from the location of *A*_{1}(*x*_{1}, *y*_{1}, *z*_{1}) to the location of *A*_{4}(*x*_{4}, *y*_{4}, *z*_{4}) in space along the rotational pathway, where the moving trajectory of the object can be decomposed into three local arcs of *A*_{1}-*A*_{2}, *A*_{2}-*A*_{3} and *A*_{3}-*A*_{4}, which are highlighted with three colors of blue (B), red (R) and green (G), respectively. Here, the rotational motions on each arc can be compensated with their local spherical holograms, which are denoted here as the B-LSH, R-LSH and G-LSH, respectively. In addition, the local planar holograms for each of those three arcs are denoted as the B-LPH, R-LPH and G-LPH, respectively, and located just behind each of their corresponding B-LSH, R-LSH and G-LSH, whose centers coincide with those of the local spherical arcs on which the object moves and whose radii are the same for all LSHs.

For the case of the blue spherical arc in Fig. 2, the 1^{st}-frame hologram pattern of the object, which is called the 1^{st}-frame B-original plane hologram (B-OPH_{1}), is initially calculated with one of the conventional CGH algorithms. Then, in the second step, rotational motions of the object in this arc are estimated and evaluated to extract the rotational parameters, such as the rotational axes and angles between two successive frames. In the third step, 3-D rotational motion compensation is sequentially carried out between the two consecutive frames, which is divided into six sub-processes: 1) propagation of the B-OPH_{1} to the B-LPH_{1}, 2) conversion of the B-LPH_{1} into the B-LSH_{1}, 3) rotation of the B-LSH_{1} with the estimated rotational axis and angle, 4) conversion of the rotated B-LSH_{1} into the rotated B-LPH_{1}, 5) calculation of the hologram pattern for the blank region of the estimated B-LPH_{1}, and 6) propagation of the rotated B-LPH_{1} into the rotated B-OPH_{1}, where the rotated B-OPH_{1} actually corresponds to the estimated 2^{nd}-frame B-OPH (B-OPH_{2}) obtained from the 3-D rotational motion compensation between the 1^{st} and 2^{nd} frames. In the final step, differences between the estimated and actual B-OPH_{2} caused by erroneous points between the estimated and actual object images are corrected.

#### 2.1 Generation of the 1^{st}-frame B-OPH

In fact, the 1^{st}-frame B-OPH (B-OPH_{1}) of the object can be generated with one of the conventional CGH algorithms, with which hologram patterns, denoted as *U*(*x*, *y*), can be calculated based on the Fresnel-diffraction equation of Eq. (1), where *u*(*x*_{o}, *y*_{o}), *λ* and *z* represent the image intensity, the wavelength of the recording light and the distance between the image and hologram planes, respectively.

#### 2.2 Estimation of three-dimensional rotational-motion parameters of the 3-D object

Figure 3 shows how to determine several parameters involved in the 3-D rotational-motion compensation for the case of the blue spherical arc, which include the rotational axis and angle between two successive frames of the 3-D image, and the coordinates of the sphere on which the blue arc is located. The rotational matrix can then be obtained with the estimated rotational axis and angle, with which the B-LSH and 3-D image of the current frame are rotated to compensate the rotational motion of the object between the two successive frames. From the rotational motion-compensated B-LSH and 3-D image of the current frame, those of the following frame can be estimated without a direct calculation process.

In addition, the coordinates of the B-LPH_{1}, as another important parameter, need to be calculated after the center coordinates of the sphere are determined. In fact, the B-LPH_{1} has the same *x* and *y*-coordinates as the center of the local sphere, since the B-LPH_{1} is located just in front of the center of the local sphere, while the *z*-coordinate of the B-LPH_{1} needs to be individually calculated according to the geometric relation between the B-LPH_{1} and the fixed local blue sphere, which is explained in detail below.

First of all, the center coordinates of the sphere are determined before calculating the other parameters. As seen in Fig. 3(b), with four points located on the blue spherical arc, the center coordinates of the local blue sphere can be calculated. That is, four equations, which are given by Eq. (2), can be derived just by substituting the coordinate values of those four points into the standard sphere function.
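This sphere-fitting step can be sketched numerically: subtracting the sphere equation of Eq. (2) for consecutive point pairs cancels the quadratic terms and leaves a 3×3 linear system for the center. The Python below is a minimal illustration, not the paper's Matlab code.

```python
import numpy as np

def sphere_center(p0, p1, p2, p3):
    """Center and radius of the sphere through four non-coplanar points.
    From |p_i - c|^2 = r^2, subtracting the p0 equation from the others
    gives the linear system 2(p_i - p0) . c = |p_i|^2 - |p0|^2."""
    P = np.array([p0, p1, p2, p3], dtype=float)
    A = 2.0 * (P[1:] - P[0])                       # rows: 2(p_i - p0)
    b = np.sum(P[1:] ** 2, axis=1) - np.sum(P[0] ** 2)
    c = np.linalg.solve(A, b)                      # sphere center
    r = np.linalg.norm(P[0] - c)                   # radius from any point
    return c, r
```

With noisy trajectory samples, a least-squares solve over more than four points would be the natural extension of the same linear system.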

Here, *r* denotes the radius of the local blue sphere centered at *O*_{1}. The coordinate values of *O*_{1}(*x*_{1}, *y*_{1}, *z*_{1}) can be obtained just by solving the equations of Eq. (2). It must be noted that this calculation of the center coordinates of the local sphere is required only when the object moves into a different arc, such as the blue, red and green arcs located on their corresponding spheres. After the center coordinates of the blue sphere are obtained, the rotational axis and angle between the two points *A*_{1−0}(*x*_{1−0}, *y*_{1−0}, *z*_{1−0}) and *A*_{1−1}(*x*_{1−1}, *y*_{1−1}, *z*_{1−1}), which are denoted as *k*(*x*_{k}, *y*_{k}, *z*_{k}) and *θ*_{1−1}, respectively, can be calculated from Eqs. (3)–(5).

Here, *O*_{1}*A*_{1−0} ⋅ *O*_{1}*A*_{1−1}, |*O*_{1}*A*_{1−0}| and |*O*_{1}*A*_{1−1}| represent the dot-product and the lengths of the two vectors *O*_{1}*A*_{1−0} and *O*_{1}*A*_{1−1}, respectively. Then, due to the perpendicular relationship between the rotation axis and the plane composed of *O*_{1}*A*_{1−0} and *O*_{1}*A*_{1−1}, the cross-product of the two vectors *O*_{1}*A*_{1−0} and *O*_{1}*A*_{1−1} can be given by Eq. (4), and the rotational axis *k*(*x*_{k}, *y*_{k}, *z*_{k}) can be given by Eq. (5), where *K* denotes the cross-product matrix of the rotational axis *k*(*x*_{k}, *y*_{k}, *z*_{k}).

The final parameter involved in the 3-D rotational motion compensation is the coordinates of the B-LPH_{1}, which is always located in front of the center of its local blue sphere *O*_{1}, as seen in Fig. 3(c). The radius of the B-LSH_{1} is fixed and symbolized as *R*, and the distance between the centers of the B-LPH_{1} and B-LSH_{1} is denoted as *d*. Then, the coordinates of the B-LPH_{1}, denoted as *P*_{B-LPH1}(*x*_{B-LPH1}, *y*_{B-LPH1}, *z*_{B-LPH1}), can be calculated from Eq. (8).

#### 2.3 3-D rotational motion compensation of the object with the B-LSH

As mentioned above, the 3-D rotational motion compensation of the object between the two consecutive frames can be carried out just by performing the following six sub-processes.

### 2.3.1 Propagation of the B-OPH_{1} to the B-LPH_{1}

As seen in Fig. 4, the B-OPH_{1} can be directly propagated to the B-LPH_{1} since the plane of the B-LPH_{1} is parallel with that of the B-OPH_{1}, just at a different depth. As illustrated in section 2.2, the coordinates of the B-LPH_{1} can be calculated according to the position of the center point of the local blue spherical arc; thus, the relative displacements along the *x*, *y* and *z*-axes can be directly calculated by subtracting the coordinates of the B-OPH_{1} from those of the B-LPH_{1}. Therefore, the propagation of the B-OPH_{1} to the B-LPH_{1} can be carried out based on the angular propagation equation of Eq. (9), considering the horizontal, vertical and longitudinal distances between those two wavefronts.

Here, *λ* and *Z*_{B-LPH1} denote the wavelength and the *z*-coordinate of the B-LPH_{1}, while *U*_{B-LPH1} and *U*_{B-OPH1}, and ${\cal F}$ and ${{\cal F}^{ - 1}}$, represent the complex amplitudes of the B-LPH_{1} and B-OPH_{1}, and the Fourier and inverse-Fourier transforms, respectively.
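As a hedged illustration of this step, the sketch below implements a standard angular-spectrum propagator between two parallel planes (the form of Eq. (9) is not reproduced above, so this is the textbook transfer-function version; any lateral offset between the B-OPH and B-LPH would additionally need a shift or a linear phase ramp, which is omitted here):

```python
import numpy as np

def angular_spectrum_propagate(u, z, wavelength, pitch):
    """Propagate a sampled complex field u by distance z between parallel
    planes, using the angular-spectrum transfer function
    H(fx, fy) = exp(j*2*pi*z*sqrt(1/wavelength^2 - fx^2 - fy^2))."""
    ny, nx = u.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # Evanescent components (arg < 0) are simply suppressed in this sketch.
    kz = 2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg >= 0, np.exp(kz), 0.0)
    return np.fft.ifft2(np.fft.fft2(u) * H)
```

Propagating forward by *z* and back by −*z* recovers the band-limited field, which is the property the B-OPH ↔ B-LPH round trip in sections 2.3.1 and 2.3.6 relies on.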

### 2.3.2 Conversion of the B-LPH_{1} into the B-LSH_{1}

Figure 5 shows a conceptual diagram of the proposed B-LPH_{1}-to-B-LSH_{1} conversion process. The B-LSH_{1} is located just in front of the B-LPH_{1}, and each pixel of the B-LPH_{1} corresponds to one of the B-LSH_{1}, where the width and height of the B-LPH_{1} are denoted as *w* and *h*, and the pixel pitch and diffraction angle are denoted as *p* and *α*, respectively. Then, the maximum transverse distance of the diffraction from a single pixel of the B-LPH_{1} can be calculated with Eq. (10).

Here, *W* should be equal to 2*p* in order to guarantee that the diffraction region of each pixel transverses the width of only one pixel in the proposed method. Thus, the conversion of the B-LPH_{1} into the B-LSH_{1} can be accomplished just by multiplication with the conversion table of Eq. (11), where *z*(*x*, *y*), which represents the propagation distance from a pixel of the B-LPH_{1} to the B-LSH_{1}, is given by Eq. (12). The maximum distance *d* between a pixel of the B-LPH_{1} and its corresponding B-LSH_{1} can also be calculated with Eq. (13). Thus, the conversion between the B-LPH_{1} and B-LSH_{1} is given by Eq. (14), where *U*_{LSH1} and *U*_{LPH1} represent the complex amplitudes of the B-LSH_{1} and B-LPH_{1}, respectively.
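Since Eqs. (11)–(14) are not reproduced above, the sketch below assumes a plausible reading of them: each LPH pixel is multiplied by a phase factor exp(j·k·z(x, y)), where z(x, y) is the per-pixel gap between the plane and a sphere of radius R positioned so that the corner pixels touch the plane (which reproduces the quoted value d ≈ 0.028 cm for R = 15 cm, w = 1.56 cm, h = 0.97 cm). The exact table of the paper may differ.

```python
import numpy as np

def lph_to_lsh(u_lph, R, w, h, wavelength):
    """Sketch of the plane-to-sphere hologram conversion: every LPH pixel is
    multiplied by exp(j*k*z(x, y)), with z(x, y) the gap between the planar
    pixel and the spherical surface of radius R along the z-axis.
    All lengths share one unit (cm here). Division by the returned table
    performs the inverse (LSH-to-LPH) conversion."""
    ny, nx = u_lph.shape
    x = np.linspace(-w / 2, w / 2, nx)
    y = np.linspace(-h / 2, h / 2, ny)
    X, Y = np.meshgrid(x, y)
    # Assumed Eq. (13): plane placed so its corners touch the sphere, giving a
    # maximum central gap d = R - sqrt(R^2 - (w/2)^2 - (h/2)^2).
    d = R - np.sqrt(R**2 - (w / 2)**2 - (h / 2)**2)
    z = np.sqrt(R**2 - X**2 - Y**2) - (R - d)     # per-pixel gap (Eq. (12))
    k = 2 * np.pi / wavelength
    table = np.exp(1j * k * z)                    # conversion table (Eq. (11))
    return u_lph * table, table                   # conversion (Eq. (14))
```

Because the table is a pure phase factor, the conversion is exactly invertible by pixel-wise division, which is what section 2.3.4 exploits.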

### 2.3.3 Rotation of the B-LSH_{1} with the estimated rotational axis and angle

As seen in Fig. 6(a), on the blue spherical arc, the object moves from *A*_{1−0} to *A*_{1−1} between the 1^{st} and 2^{nd} frames with the rotational axis and angle of *k* and *θ*_{1−1}. Just by rotating the B-LSH_{1} with these estimated rotational axis and angle, the rotated version of the B-LSH_{1} can be obtained.

Then, the rotated B-LSH_{1}(R-B-LSH_{1}) can be obtained from the B-LSH_{1} just by being multiplied with the rotational matrix of *R _{M}* as seen in Eq. (15).

Here, *U*_{B-LSH1} and *U*_{R-B-LSH1} represent the complex amplitudes of the B-LSH_{1} and R-B-LSH_{1}, respectively. As seen in Fig. 6(a), the *U*_{B-LSH1} is rotated with the rotational axis and angle of *k* and *θ*_{1−1} just by multiplication with the *R*_{M}, from which the *U*_{R-B-LSH1} can then be obtained. Here the R-B-LSH_{1} happens to be composed of overlapped, non-overlapped and meaningless areas, as seen in Fig. 6(b), where the overlapped area is used as most of the estimated B-LSH of the 2^{nd} frame, the small non-overlapped part, called a blank area, is to be filled up, and the meaningless area is discarded.

### 2.3.4 Conversion of the rotated B-LSH_{1} into its corresponding version of the B-LPH_{1}

Now, the rotated B-LSH_{1}, which is equivalent to the 3-D rotational motion-compensated version of the B-LSH_{1}, is converted back into its rotated version of the B-LPH_{1}, which can be derived from the rotated B-LSH_{1} just by division by the conversion table of Eq. (11), as seen in Eq. (16). Here, the multiplication and division by the conversion table are involved in the LPH-to-LSH and LSH-to-LPH conversions, respectively, which are given by Eqs. (17)–(19).

Here, *U*_{R-B-LSH1} and *U*_{R-B-LPH1} represent the complex amplitudes of the rotated B-LSH_{1} and the rotated B-LPH_{1}, respectively.

### 2.3.5 Calculation of the hologram pattern for the blank region of the estimated B-OPH_{2}

As mentioned above, the rotated version of the B-LSH_{1} happens to consist of the overlapped and non-overlapped regions due to the rotation operation as seen in Fig. 6(b), which means that the estimated B-LPH_{2} has the blank region to be filled up together with the overlapped region to be reused. Thus, the hologram pattern for the blank region of the estimated B-LPH_{2} needs to be calculated with one of the conventional CGH algorithms.

Here, the size of the blank region is a critical factor determining the operational performance of the proposed 3-D rotational-motion compensation method, because the additional CGH calculation time required for the blank region is closely related to its size. In fact, the size of the blank region is determined by the rotational angle and the radius of the B-LSH_{1}. For example, if the object rotates by an angle of *θ*_{1−1} along the spherical arc path, the B-LSH_{1} is also rotated simultaneously by the same angle, and the length of the arc path over which the B-LSH_{1} rotates can be calculated with the arc-length formula given by Eq. (20), where *L* and *R* represent the length of the arc over which the B-LSH_{1} rotates and the radius of the B-LSH_{1}, respectively. Here the width of the blank region becomes approximately equal to *L*. Thus, as the rotation angle becomes smaller, the corresponding size of the blank region decreases.
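Using the arc-length relation of Eq. (20) with the values quoted in Section 3.4 (R = 15 cm, θ = 0.03 rad, 8.1 µm pixel pitch), the blank-region width can be checked numerically:

```python
# Width of the blank region left by rotating the LSH (Eq. (20): L = R * theta).
R = 15.0          # radius of the B-LSH in cm (value used in Section 3.4)
theta = 0.03      # estimated rotation angle between frames, in rad
pitch = 8.1e-4    # hologram pixel pitch in cm (8.1 um)

L = R * theta                     # arc length swept by the rotation
blank_pixels = round(L / pitch)   # approximate blank-region width in pixels
print(f"{L:.2f} cm -> {blank_pixels} px")  # 0.45 cm -> 556 px
```

Note that Section 3.4 splits this arc into separate horizontal and vertical components (463 and 296 pixels), so the single value here is only the total swept arc length, not either projected width.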

### 2.3.6 Propagation of the rotated B-LPH_{1} to its corresponding version of the B-OPH_{1}

As seen in Fig. 4, the propagation of the rotated B-LPH_{1} to its corresponding version of the B-OPH_{1} is the same as the inverse of the B-OPH_{1}-to-B-LPH_{1} transformation described in section 2.3.1, and can be described by Eq. (21).

Here, the rotated B-OPH_{1} obtained from the proposed 3-D rotational-motion compensation process between the 1^{st} and 2^{nd} frames corresponds to the estimated version of the 2^{nd}-frame B-OPH, which is called the estimated B-OPH_{2}.

#### 2.4 Error correction process

The estimated motion parameters may not be accurate due to errors in motion estimation, which causes the compensated hologram not to be perfectly matched with the real hologram of the 2^{nd}-frame. Thus, the motion errors are evaluated by calculating the difference between the actual 2^{nd}-frame object image and the compensated object image, which can be obtained by rotating the 1^{st}-frame object image with the estimated motion parameters. However, the difference between the estimated and actual object images of the 2^{nd}-frame is sometimes so small that it can be ignored. For the quantitative evaluation of the difference between the estimated and actual object images, the SNR is employed as a parameter, which is defined as Eq. (22),

where *N* and *M* denote the width and height of the image, and *ũ*(*x*, *y*) and *u*(*x*, *y*) represent the pixels of the estimated and actual images of the 2^{nd}-frame, respectively. *ũ*(*x*, *y*) can be obtained just by rotating the 1^{st}-frame image with the rotation matrix *R*_{M}, as given by Eq. (23). Thus, the similarity between the estimated and actual images of the 2^{nd}-frame can be evaluated with the SNR value.
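Since Eq. (22) is not reproduced above, the sketch below assumes the common energy-ratio definition of the SNR in decibels for comparing the estimated and actual 2^{nd}-frame images; the paper's exact normalization may differ.

```python
import numpy as np

def snr_db(u_actual, u_estimated):
    """Assumed SNR between the actual 2nd-frame image u_actual and the
    motion-compensated estimate u_estimated:
    10 * log10(signal energy / error energy), in dB."""
    err = np.asarray(u_actual) - np.asarray(u_estimated)
    signal_energy = np.sum(np.abs(u_actual) ** 2)
    error_energy = np.sum(np.abs(err) ** 2)
    return 10.0 * np.log10(signal_energy / error_energy)
```

A higher SNR means the rotation-compensated image is closer to the actual frame, so fewer erroneous points need correction in the final step.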

## 3. Experiments and results

#### 3.1 Overall configuration of the experimental setup

Figure 7 shows an overall experimental setup of the proposed system, which is composed of digital and optical processes. In the digital process, as seen in Fig. 7(a), hologram patterns for the input 3-D scene of the 1^{st}-frame are initially generated with three kinds of conventional CGH algorithms, namely the NLUT, WRP and 1-D NLUT methods, and the hologram patterns of the following frames are generated based on the proposed SH-3DRMC method. In addition, for the comparative performance analysis, not only the original NLUT, WRP and 1-D NLUT methods, but also those employing the conventional CH-RMC method, are used for the generation of the holographic video of the input 3-D scene. The code is implemented in Matlab 2017 on a personal computer with a 3.00 GHz CPU and 64.0 GB of memory.

In the optical process, as seen in Fig. 7(b), a green laser (G-laser) (Stradus 532, VORTRAN Laser Technology) is used as the light source, which is collimated and expanded by the laser collimator (LC) and beam expander (BE) (Model: HB-4XAR.14, Newport), and then illuminated through a beam splitter (BS) onto the SLM (spatial light modulator), where the calculated hologram pattern is loaded. In the experiments, a reflection-type amplitude-modulation SLM (Model: HOLOEYE LC-R-1080) with a resolution of 1920×1200 pixels and a pixel pitch of 8.1 µm is employed, and the off-axis reconstructed image is captured with a CCD camera (Thorlabs, 1280×1024 pixels).

In the experiment, an input 3-D scene with 30 frames of video images, in which an ‘*Airplane*’ object moves around the fixed ‘*Sphere*’ along a curved pathway with a local arc, is employed as the test video scenario and generated with *3DS MAX*, which is shown in Fig. 8. Every input 3-D image has 256 depth planes, where each depth plane is composed of 1280×720 pixels. In addition, the sampling rates on the *x*-*y* plane and along the *z*-direction are all set to 0.1 *mm*. As seen in Fig. 8, the test 3-D object of the ‘*Airplane*’ is assumed to fly from the location of *A*_{1−1}(6.63 *cm*, −6.42 *cm*, −5.46 *cm*) to the location of *A*_{1−30}(−9.07 *cm*, 3.65 *cm*, −5.17 *cm*) along an arbitrary 3-D moving trajectory in space, where the top, front and left views of the moving trajectory of the object are shown in Figs. 8(a), (b) and (c), respectively.

#### 3.2 Generation of the 1^{st}-frame OPH

In the experiment, the conventional NLUT, WRP and 1-D NLUT methods are employed to calculate the 1^{st}-frame OPH (OPH_{1}) of the input 3-D scene, where two kinds of hologram patterns corresponding to the ‘*Sphere*’ and ‘*Airplane*’ objects are separately calculated, and only the hologram pattern for the moving ‘*Airplane*’ object is involved in the 3-D rotational-motion compensation process to generate the hologram patterns of the following frames. Here, the resolution and pixel size of the hologram pattern are set to 1920×1200 pixels and 8.1 µ*m*, respectively, which are the same as those of the SLM, and the OPH is located at (0, 0, −40 *cm*).

In the NLUT-based system, 256 PFPs, one for each depth plane of the input 3-D scene, are stored in advance, where only the PFPs for the center-located object points on each depth plane are pre-calculated, and the fringe patterns for the other object points on the same depth plane are obtained just by shifting those PFPs. Then, the CGH patterns of the ‘*Airplane*’ object are calculated just by multiplying each of the object intensities with its corresponding PFP and adding them together.

On the other hand, in the 1-D NLUT-based system, only a pair of half-sized 1-D B-PFP and DC-PFP are pre-calculated and stored based on the concentric-symmetry property of the PFP, and all depth planes are obtained from this pair of half-sized 1-D PFPs based on its thin-lens property, which enables minimization of the required memory size down to a few KB regardless of the number of depth planes. Moreover, all CGH calculations in this method are performed fully one-dimensionally based on its shift-invariance property, which also minimizes its overall hologram calculation time. In the WRP method, a two-step operation is required. In the first step, the complex amplitudes of the 3-D object on a wavefront recording plane (WRP) are calculated, where the WRP acts as a virtual plane placed between the object and the hologram pattern. In the second step, the CGH pattern is generated just by calculating the diffracted patterns from the WRP to the CGH plane based on the Fresnel diffraction.

#### 3.3 Estimation of 3-D rotational-motion parameters of the object

Above all, the center of the local sphere on which the moving trajectory of the 3-D object is located, denoted as *O*_{1}, is calculated by substituting four points belonging to the trajectory arc into the sphere function set of Eq. (2). Here, the four points *A*_{1−0}(*x*_{1−0}, *y*_{1−0}, *z*_{1−0}), *A*_{1−1}(*x*_{1−1}, *y*_{1−1}, *z*_{1−1}), *A*_{1−2}(*x*_{1−2}, *y*_{1−2}, *z*_{1−2}) and *A*_{1−3}(*x*_{1−3}, *y*_{1−3}, *z*_{1−3}), with the coordinate values of *x*_{1−0} = 6.63 *cm*, *y*_{1−0} = −6.42 *cm*, *z*_{1−0} = −5.46 *cm*, *x*_{1−1} = 6.26 *cm*, *y*_{1−1} = −6.10 *cm*, *z*_{1−1} = −5.76 *cm*, *x*_{1−2} = 5.61 *cm*, *y*_{1−2} = −5.77 *cm*, *z*_{1−2} = −6.03 *cm*, and *x*_{1−3} = 5.09 *cm*, *y*_{1−3} = −6.28 *cm*, *z*_{1−3} = −5.43 *cm*, are used to solve this function set, and the center coordinates of the local sphere are estimated to be *O*_{1}(0, 0, 12.27 *cm*). Two other critical parameters required for the 3-D rotational-motion compensation process are the rotational axes and angles between two successive frames, which are used for rotating the LSHs to carry out the 3-D rotational-motion compensation.

The rotational angle between two consecutive frames, such as the 1^{st} and 2^{nd} frames, can be estimated just by substituting the two vectors spanning this rotation angle into Eq. (3), where the two vectors *O*_{1}*A*_{1−0} and *O*_{1}*A*_{1−1} for the 1^{st} and 2^{nd} frames are given by (6.63 *cm*, −6.42 *cm*, −17.73 *cm*) and (6.13 *cm*, −6.10 *cm*, −18.03 *cm*), respectively. The rotational angle between the 1^{st} and 2^{nd} frames, represented by *θ*_{1−1}, is then calculated to be about 0.03 *rad*. In addition, the rotational axis can be obtained with Eq. (4). Here, the rotational axis *k*(*x*_{k}, *y*_{k}, *z*_{k}) must have the same direction as the cross-product of *O*_{1}*A*_{1−0} and *O*_{1}*A*_{1−1}, so it can be obtained just by calculating the cross-product of those two vectors. The coordinate values of the rotational axis *k*(*x*_{k}, *y*_{k}, *z*_{k}) are calculated to be (0.60 *cm*, 0.97 *cm*, −0.69 *cm*) from Eq. (5). Then, the rotational matrix for transforming the 3-D image from the 1^{st} frame (*A*_{1}) to the 2^{nd} frame (*A*_{2}) can be calculated according to the matrix notation of Rodrigues’ rotation formula of Eq. (6).
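The estimation steps above (Eqs. (3)–(6)) can be condensed into a short sketch; the Python below is an illustrative re-implementation, not the paper's Matlab code.

```python
import numpy as np

def rotation_from_points(o, a_prev, a_curr):
    """Rotation axis, angle and matrix moving a_prev to a_curr about the
    sphere center o: angle from the dot product (Eq. (3)), axis from the
    normalized cross product (Eqs. (4)-(5)), and matrix from Rodrigues'
    formula R = I + sin(theta)*K + (1 - cos(theta))*K^2 (Eq. (6))."""
    v0 = np.asarray(a_prev, dtype=float) - o
    v1 = np.asarray(a_curr, dtype=float) - o
    theta = np.arccos(np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1)))
    k = np.cross(v0, v1)
    k = k / np.linalg.norm(k)
    # Cross-product (skew-symmetric) matrix K of the unit axis k.
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return k, theta, R
```

Applied to the paper's vectors *O*_{1}*A*_{1−0} = (6.63, −6.42, −17.73) and *O*_{1}*A*_{1−1} = (6.13, −6.10, −18.03) (in cm, with *o* at the origin of the centered frame), this routine yields an angle of roughly 0.03 rad, consistent with the value quoted above.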

#### 3.4 3-D rotational-motion compensation based on the LSH

As mentioned above, the 3-D rotational-motion compensation can be carried out with a six-step process. First, as seen in Fig. 9(a), the OPH_{1} is directly propagated to the LPH_{1} based on Eq. (9), where *x*_{B-LPH}, *y*_{B-LPH} and *z*_{B-LPH} are estimated with the coordinate values of *O*_{1} from Eq. (8). For the test video scenario, those coordinate values are calculated to be *x*_{B-LPH} = *x*_{1} = 0, *y*_{B-LPH} = *y*_{1} = 0 and *z*_{B-LPH} = *z*_{1} − (*R* − *d*) = −2.70 *cm*, where *R* − *d* is estimated to be 14.972 *cm* from Eq. (13) under the condition of *R* = 15 *cm*.

Second, the LPH_{1} is transformed into its corresponding LSH_{1} simply by multiplication with the conversion table of Eq. (14), where the distances between each pixel of the LSH_{1} and LPH_{1} can be calculated with Eq. (12) for the case of *R* = 15 *cm*, *d* = *R* − (*R* − *d*) = 15 − 14.972 = 0.028 *cm*, *w* = 1.56 *cm* and *h* = 0.97 *cm*, respectively. With those parameters, the conversion table can be obtained, which is shown in Fig. 9(b).

Third, the LSH_{1} is rotated with the rotational matrix calculated from the estimated rotational axis and angle. This rotation can be done just by directly shifting the LSH_{1} along the horizontal and vertical directions with their corresponding displacements, which saves processing time in comparison with rotating the LSH_{1} with the rotational matrix, as seen in Fig. 10(a). Here, the horizontal and vertical displacements are estimated to be 0.375 *cm* (= 0.5/(20/15)) ≈ 463 pixels and 0.24 *cm* (= 0.32/(20/15)) ≈ 296 pixels, respectively, under the condition that *r*_{1} is estimated to be around 20 *cm*. Furthermore, the rotational operation of the LSH_{1} may cause a non-overlapped region to appear, but the size of this region can be kept very small in comparison with that of the whole hologram pattern, since the transverse arc of the LSH_{1} becomes smaller than that of the 3-D object during the rotational operation. Here the rotational angle is calculated to be 0.03 *rad* and the radius of the LSH_{1} is set to be 15 *cm*, so the transverse arc length of the LSH_{1} is calculated to be about 0.45 *cm* (= 15 × 0.03) based on the arc-length formula.
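The pixel-shift arithmetic of this step can be checked in a few lines. The ~8.1 µm pixel pitch below is not stated in the text; it is inferred here from the reported correspondence of 0.375 cm to about 463 pixels:

```python
def shift_pixels(displacement_cm, pitch_cm):
    """Convert a displacement along the spherical surface into an
    integer pixel shift of the LSH."""
    return round(displacement_cm / pitch_cm)

PITCH = 8.1e-4   # assumed pixel pitch in cm (~8.1 um, inferred from the text)

h = shift_pixels(0.375, PITCH)   # horizontal shift, about 463 pixels
v = shift_pixels(0.24, PITCH)    # vertical shift, about 296 pixels
arc = 15.0 * 0.03                # transverse arc length: R * theta = 0.45 cm
```

Replacing a full matrix rotation with two integer shifts is what makes this step cheap: a shift is O(N) memory movement, whereas resampling the hologram under a 3-D rotation requires per-pixel interpolation.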

Fourth, the rotated LSH_{1} is then transformed back into its corresponding version of the LPH_{1} with the same process as the second step, except that the multiplication by the conversion table is replaced with division by it, as seen in Fig. 10(b).

Fifth, as seen in Fig. 11(a) the hologram pattern for the blank region is calculated with the conventional NLUT, WRP and 1-D NLUT methods directly by considering the relative positions in the horizontal and vertical directions as well as the size of the blank region. The widths of the blank region in the horizontal and vertical directions are estimated to be 0.375*cm* and 0.24*cm*, which correspond to about 463 and 296 pixels, respectively. Thus, the size of the blank region becomes 48.78% (=(463×1200 + 296×1920)/(1920×1200)) of the whole hologram. Here, it must be noted that the ratio of the size between the blank and whole region of the hologram may depend on the geometric structure of the whole 3-D rotational model composed of the 3-D object and LSH_{1}.
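The blank-region ratio quoted above can be reproduced directly. Following the paper’s formula, the horizontal and vertical strips are simply summed (note that this counts their corner overlap in both strips):

```python
# Fraction of the 1920 x 1200 hologram that must be freshly calculated
# (the blank region) after shifting the LSH by 463 pixels horizontally
# and 296 pixels vertically.
W, H = 1920, 1200
dx, dy = 463, 296
blank = dx * H + dy * W        # horizontal strip + vertical strip of pixels
ratio = blank / (W * H)        # about 0.4878, i.e. 48.78 % of the hologram
```

So a bit less than half of the hologram still goes through the conventional CGH calculation, which bounds the achievable speed-up of the compensation scheme for this particular 3-D rotational model.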

Sixth, the rotated LPH_{1} is propagated back to the OPH, which is the same operation as the first step except that the propagation direction is reversed. As mentioned above, the rotated version of the OPH_{1} has been obtained from the proposed rotational-motion compensation process between the 1^{st} and 2^{nd} frames, corresponding to the estimated version of the 2^{nd}-frame OPH (OPH_{2}), as seen in Fig. 11(b).

#### 3.5 Error correction for the compensated hologram pattern

The similarity between the estimated and actual object images can be measured in terms of the SNR, where a threshold value is set to be 30 dB [42]. Figures 12(a-1), 12(a-2), 12(a-3) and 12(a-4) show the estimated, actual, difference (error) and error-corrected images of the 2^{nd} frame, respectively. The number of different object points is 531, which is around 16.16% of the 3,286 object points of the actual image of the 2^{nd} frame. Here, the SNR value of the 2^{nd}-frame is calculated to be 28.26 dB, which falls below the threshold, so an additional error-correction process was carried out. In addition, Figs. 12(b-1), 12(b-2) and 12(b-3) represent the estimated OPH_{2}, the calculated hologram pattern for the error image, and the error-corrected OPH_{2}, respectively.

Here, the error correction can be done just by adding the calculated hologram pattern for the error image of Fig. 12(b-2) to the estimated OPH_{2} of Fig. 12(b-1), which results in the error-corrected OPH_{2} of Fig. 12(b-3). With this error-correction process, the estimated image of Fig. 12(a-1) obtained with the proposed method can be made to accurately match its actual counterpart of Fig. 12(a-2).
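The error-correction logic can be sketched as follows. The SNR definition and the `cgh` callback interface are assumptions for illustration, not the paper’s exact formulation; holograms add linearly, which is why correcting with the hologram of the difference image works:

```python
import numpy as np

def snr_db(actual, estimated):
    """SNR of the estimated object image against the actual one, in dB
    (assumed definition: 10*log10 of signal power over error power)."""
    signal = np.sum(actual.astype(float) ** 2)
    noise = np.sum((actual.astype(float) - estimated.astype(float)) ** 2)
    return 10 * np.log10(signal / noise)

def correct(estimated_oph, actual_img, estimated_img, cgh, threshold=30.0):
    """If the estimated image deviates too much from the actual one,
    calculate a hologram only for the difference (error) image with any
    CGH routine `cgh` and add it to the estimated OPH."""
    if snr_db(actual_img, estimated_img) >= threshold:
        return estimated_oph                  # close enough, skip correction
    error_img = actual_img - estimated_img    # error (difference) image
    return estimated_oph + cgh(error_img)     # error-corrected OPH_2
```

Because only the error image (here, 531 of 3,286 object points) goes through the CGH routine, the correction cost scales with the error size rather than the whole frame.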

#### 3.6 Comparative performance analysis of the conventional CGH algorithms with and without employing the proposed method

Here, the operational performances of the conventional 1-D NLUT and its two modified versions employing the conventional CH-RMC and proposed SH-3DRMC schemes, which are called the original, CH-RMC-based, and SH-3DRMC-based 1-D NLUT methods, respectively, are comparatively discussed in detail in terms of the average number of calculated object points (ANCOP) and average calculation time (ACT). Table 1 shows the ANCOPs and ACTs for each of those original, CH-RMC and SH-3DRMC-based 1-D NLUT methods, respectively.

As seen in Table 1, the ANCOPs and ACTs of the original, CH-RMC-based, and SH-3DRMC-based 1-D NLUT methods have been estimated to be 3261, 5052, 527 and 8.47 *s*, 15.16 *s*, 5.79 *s*, respectively. Here, the ANCOP and ACT of the CH-RMC-based 1-D NLUT have been found to be rather increased by 54.92% and 78.65%, respectively, in comparison with those of the original 1-D NLUT. Since the conventional CH-RMC method was proposed only for 2-D rotational-motion compensation, it may not be effective in dealing with this 3-D rotational-motion compensation problem. In other words, the 3-D rotational motion, composed of three rotational motions along the *x*, *y* and *z*-axes, cannot be compensated just by rotating a cylindrical hologram only around the *y*-axis, which results in the increased ANCOP and ACT values even though a motion-compensation method was employed. On the other hand, the ANCOP and ACT values of the 1-D NLUT employing the proposed SH-3DRMC method have been found to be greatly decreased by 83.84% and 31.64%, respectively.

Furthermore Table 1 illustrates the detailed time costs for each of those six processes of the SH-3DRMC-based 1-D NLUT method, such as the parameter estimation (PE), propagation between the OPH and LPH (POL), conversion between the LPH and LSH (CLL), rotation of the LSH (RL), hologram calculation of the blank region (HCB) and error correction (EC). For comparison, the time costs for those six processes of the CH-RMC-based 1-D NLUT method are also included in Table 1.

As seen in Table 1, the calculation times taken for the parameter-estimation and LSH-rotation processes of the CH-RMC-based and SH-3DRMC-based 1-D NLUT methods have been calculated to be 43 *ms*, 48 *ms*, and 9 *ms*, 11 *ms*, respectively. In the CH-RMC-based method, the parameter-extraction and rotational processes are carried out in two dimensions, whereas in the SH-3DRMC-based method they are performed in three dimensions, which results in slight differences in time cost between them. On the other hand, the calculation times taken for the propagation between the OPH and LPH and the conversion between the LPH and LSH have been equally calculated to be 130 *ms* and 12 *ms*, respectively, in both the CH-RMC-based and SH-3DRMC-based 1-D NLUT methods, since these two operations are carried out identically in both methods.

In fact, the calculation times taken for the four processes of parameter estimation, propagation between the OPH and LPH, conversion between the LPH and LSH, and rotation of the LSH can be considered trivial in both methods in comparison with those of the two remaining processes, the hologram calculation of the blank region and the error correction, as seen in Table 1. The calculation times taken for the blank-region hologram calculation and error-correction processes of the CH-RMC-based and SH-3DRMC-based 1-D NLUT methods have been found to be 1.89 *s*, 12.94 *s* and 3.90 *s*, 1.55 *s*, respectively, which correspond to 12.47%, 85.33% and 67.90%, 26.82% of the total time costs, respectively.

For the case of the hologram calculation of the blank region, only the blank regions due to the horizontal shifts of the LCHs are to be calculated in the CH-RMC-based method, whereas in the SH-3DRMC-based method, those blank regions caused by both of the horizontal and vertical shifts of the LSHs are to be calculated, which results in some differences in their time costs. Moreover, as seen in Table 1, the ANCOP of the CH-RMC-based method has been increased up to 154.92%, but decreased down to 16.16% in the SH-3DRMC-based method in comparison with those of their original methods, which leads to some differences in their time costs since the time taken for the error correction process depends on the number of calculated object points.

As mentioned above, Table 1 shows that the ACT of the CH-RMC-based 1-D NLUT has been increased up to 178.65%, whereas that of the SH-3DRMC-based 1-D NLUT has been decreased down to 68.36%, in comparison with that of the original 1-D NLUT. These results reveal that the computational speed of the conventional 1-D NLUT method can be greatly enhanced when the proposed SH-3DRMC scheme is employed. In addition, it is also noted that the proposed scheme can be applied to any conventional CGH algorithm to enhance its computational speed.

Table 2 also illustrates the additional experimental results for the two other NLUT and WRP methods employing the conventional CH-RMC and proposed SH-3DRMC methods. As seen in Table 2, the ACTs of the original NLUT and WRP methods, as well as those of their CH-RMC-based and SH-3DRMC-based versions, have been compared. Just like the case of the 1-D NLUT method, the ACTs of the CH-RMC-based NLUT and WRP methods have been increased up to 179.57% and 181.98%, whereas those of the SH-3DRMC-based NLUT and WRP methods have been greatly decreased down to 65.25% and 58.63%, respectively, in comparison with those of their original versions.

In other words, the average calculation time for the three conventional NLUT, WRP and 1-D NLUT methods employing the proposed SH-3DRMC has been found to be greatly decreased down to 64.08% of that of their original methods, which means an enhancement of 35.92% in computational speed. These good experimental results confirm the applicability of the proposed method to any conventional CGH algorithm and its feasibility in practical applications.
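The averaged figures quoted here follow directly from the per-method reductions reported in the experiments:

```python
# Per-method reductions in average calculation time (percent) when the
# SH-3DRMC scheme is employed, for the NLUT, WRP and 1-D NLUT methods.
reductions = [34.75, 41.37, 31.64]

avg_reduction = sum(reductions) / len(reductions)   # 35.92 % faster on average
remaining = 100 - avg_reduction                     # 64.08 % of the original ACT
```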

#### 3.7 Reconstruction of the holographic 3-D video

Figures 13(a_{1})–13(a_{4}) and 13(b_{1})–13(b_{4}) show four 3-D scene images of the 1^{st}, 11^{th}, 21^{st} and 30^{th} frames, which are computationally and optically reconstructed from the holographic videos generated with the SH-3DRMC-based NLUT method. The fixed ‘*Sphere*’ images are reconstructed at the depth of 323 *mm*, while the ‘*Airplane*’ images are reconstructed at the depth planes of 348 *mm*, 325 *mm*, 328 *mm* and 333 *mm* measured from the hologram plane, for each of the 1^{st}, 11^{th}, 21^{st} and 30^{th} frames of the test video scenario, respectively. All 30 frames of the computationally and optically reconstructed 3-D scenes, compressed into the video files of Visualization 1 and Visualization 2, respectively, are also included in Fig. 13.

As seen in the optically-reconstructed input scenes, the ‘*Airplane*’ image of the 21^{st}-frame of Fig. 13(b_{3}) looks more blurred than those of the 11^{th} and 30^{th}-frames of Figs. 13(b_{2}) and 13(b_{4}), which results from the fact that the 3-D scenes have been optically reconstructed while focused on the fixed ‘*Sphere*’ image. That is, the depth distance between the ‘*Airplane*’ and ‘*Sphere*’ objects at the 21^{st}-frame is a little bit larger than that at the 11^{th}-frame, which might cause the ‘*Airplane*’ image of the 21^{st}-frame to be slightly more out of focus.

The successful experimental results on the reconstruction of those holographic videos for the test video scenario may finally confirm the feasibility of the proposed method. However, in practice, the proposed method may not be applicable to the special case where an object moves along a trajectory with continuously-varying curvature, since such a moving trajectory cannot be properly segmented into a set of local arcs.

## 4. Conclusions

In this paper, a new SH-3DRMC scheme has been proposed for the accelerated generation of holographic videos of a 3-D object freely moving in space along a trajectory with many curvatures. All the rotational motions of the object made on each arc could be compensated with the proposed method, which results in a great reduction of the computational complexity of any conventional CGH algorithm. Experiments with a test scenario show that the computational speeds of the three conventional NLUT, WRP and 1-D NLUT methods employing the SH-3DRMC scheme are enhanced by 35.92% on average in comparison with those of their original methods, which finally confirms the feasibility of the proposed method.

## Funding

MSIT (Ministry of Science and ICT), Korea, under the ITRC support program (IITP-2017-01629) supervised by the IITP, Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (2018R1A6A1A03025242).

## Disclosures

The authors declare no conflicts of interest.

## References

**1. **D. Gabor, “A new microscopic principle,” Nature **161**(4098), 777–778 (1948). [CrossRef]

**2. **C. J. Kuo and M. H. Tsai, * Three-Dimensional Holographic Imaging* (John Wiley & Sons, 2002).

**3. **T.-C. Poon, * Digital Holography and Three-dimensional Display* (Springer Verlag, 2007).

**4. **X. Xu, Y. Pan, P. P. M. Y. Lwin, and X. Liang, “3D holographic display and its data transmission requirement,” in Proceedings of IEEE Conference on Information Photonics and Optical Communications (IEEE, 2011), pp. 1–4.

**5. **F. Yaras, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. **48**(34), H48–H53 (2009). [CrossRef]

**6. **HoloEye Photonics AG, “GAEA 10 Megapixel Phase Only Spatial Light Modulator (Reflective),” http://holoeye.com/spatial-light-modulators/gaea-4k-phase-only-spatial-light-modulator/.

**7. **M. Makowski, I. Ducin, K. Kakarenko, A. Kolodziejczyk, A. Siemion, A. Siemion, J. Suszek, M. Sypek, and D. Wojnowski, “Efficient image projection by Fourier electroholography,” Opt. Lett. **36**(16), 3018–3020 (2011). [CrossRef]

**8. **H. Nakayama, N. Takada, Y. Ichihashi, S. Awazu, T. Shimobaba, N. Masuda, and T. Ito, “Real-time color electroholography using multiple graphics processing units and multiple high-definition liquid-crystal display panels,” Appl. Opt. **49**(31), 5993–5996 (2010). [CrossRef]

**9. **K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express **19**(10), 9086–9101 (2011). [CrossRef]

**10. **T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, “Realistic expression for full-parallax computer-generated holograms with the ray-tracing method,” Appl. Opt. **52**(1), A201–A209 (2013). [CrossRef]

**11. **T. Ichikawa, T. Yoneyama, and Y. Sakamoto, “CGH calculation with the ray tracing method for the Fourier transform optical system,” Opt. Express **21**(26), 32019–32031 (2013). [CrossRef]

**12. **M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging **2**(1), 28–34 (1993). [CrossRef]

**13. **T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. **34**(20), 3133–3135 (2009). [CrossRef]

**14. **D. Arai, T. Shimobaba, K. Murano, Y. Endo, R. Hirayama, D. Hiyama, T. Kakue, and T. Ito, “Acceleration of computer-generated holograms using tilted wavefront recording plane method,” Opt. Express **23**(2), 1740–1747 (2015). [CrossRef]

**15. **T. Tommasi and B. Bianco, “Frequency analysis of light diffraction between rotated planes,” Opt. Lett. **17**(8), 556–558 (1992). [CrossRef]

**16. **D. Im, J. Cho, J. Hahn, B. Lee, and H. Kim, “Accelerated synthesis algorithm of polygon computer-generated holograms,” Opt. Express **23**(3), 2863–2871 (2015). [CrossRef]

**17. **K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A **20**(9), 1755–1762 (2003). [CrossRef]

**18. **Y. Pan, X. Xu, S. Solanki, X. Liang, R. Tanjung, C. Tan, and T. Chong, “Fast CGH computation using S-LUT on GPU,” Opt. Express **17**(21), 18543–18555 (2009). [CrossRef]

**19. **H. Yoshikawa, T. Yamaguchi, and R. Kitayama, “Real-time generation of full color image hologram with compact distance look-up table,” in Digital Holography and Three-Dimensional Imaging, 2009 OSA Technical Digest Series (Optical Society of America, 2009), pp. DWC4.

**20. **H. Yoshikawa, S. Iwase, and T. Oneda, “Fast computation of Fresnel holograms employing difference,” Proc. SPIE **3956**, 48–55 (2000). [CrossRef]

**21. **N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express **21**(7), 9192–9197 (2013). [CrossRef]

**22. **Y. Kimura, R. Kawaguchi, T. Sugie, T. Kakue, T. Shimobaba, and T. Ito, “Circuit design of special-purpose computer for holography HORN-8 using eight Virtex-5 FPGAs,” in Proc. 3D Syst. Appl., S3–2 (2015).

**23. **T. Sugie, T. Akamatsu, T. Nishitsuji, R. Hirayama, N. Masuda, H. Nakayama, Y. Ichihashi, A. Shiraki, M. Oikawa, N. Takada, and Y. Endo, “High-performance parallel computing for next-generation holographic imaging,” Nat. Electron. **1**(4), 254–259 (2018). [CrossRef]

**24. **H. G. Kim and Y. M. Ro, “Ultrafast layer based computer-generated hologram calculation with sparse template holographic fringe pattern for 3-D object,” Opt. Express **25**(24), 30418–30427 (2017). [CrossRef]

**25. **T. Shimobaba and T. Ito, “Fast generation of computer-generated holograms using wavelet shrinkage,” Opt. Express **25**(1), 77–87 (2017). [CrossRef]

**26. **D. Blinder and P. Schelkens, “Accelerated computer generated holography using sparse bases in the STFT domain,” Opt. Express **26**(2), 1461–1473 (2018). [CrossRef]

**27. **P. W. M. Tsang and T. C. Poon, “Fast generation of digital holograms based on warping of the wavefront recording plane,” Opt. Express **23**(6), 7667–7673 (2015). [CrossRef]

**28. **Z. Zeng, H. Zheng, Y. Yu, and A. K. Asundi, “Off-axis phase-only holograms of 3D objects using accelerated point-based Fresnel diffraction algorithm,” Opt. Lasers. Eng. **93**, 47–54 (2017). [CrossRef]

**29. **S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. **47**(19), D55–D62 (2008). [CrossRef]

**30. **Y. Zhao, K. C. Kwon, M. U. Erdenebat, M. S. Islam, S. H. Jeon, and N. Kim, “Quality enhancement and GPU acceleration for a full-color holographic system using a relocated point cloud gridding method,” Appl. Opt. **57**(15), 4253–4262 (2018). [CrossRef]

**31. **M.-W. Kwon, S.-C. Kim, S.-E. Yoon, Y.-S. Ho, and E.-S. Kim, “Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes,” Opt. Express **23**(3), 2101–2120 (2015). [CrossRef]

**32. **M.-W. Kwon, S.-C. Kim, and E.-S. Kim, “Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes,” Appl. Opt. **55**(3), A22–A31 (2016). [CrossRef]

**33. **S.-C. Kim, J.-H. Yoon, and E.-S. Kim, “Fast generation of three-dimensional video holograms by combined use of data compression and lookup table techniques,” Appl. Opt. **47**(32), 5986–5995 (2008). [CrossRef]

**34. **S.-C. Kim, X.-B. Dong, M.-W. Kwon, and E.-S. Kim, “Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table,” Opt. Express **21**(9), 11568–11584 (2013). [CrossRef]

**35. **X.-B. Dong, S.-C. Kim, and E.-S. Kim, “MPEG-based novel-look-up-table method for accelerated computation of digital video holograms of three-dimensional objects in motion,” Opt. Express **22**(7), 8047–8067 (2014). [CrossRef]

**36. **X.-B. Dong, S.-C. Kim, and E.-S. Kim, “Three-directional motion compensation-based novel-look-up-table for video hologram generation of three-dimensional objects freely maneuvering in space,” Opt. Express **22**(14), 16925–16944 (2014). [CrossRef]

**37. **H. K. Cao and E. S. Kim, “Full-scale one-dimensional NLUT method for accelerated generation of holographic videos with the least memory capacity,” Opt. Express **27**(9), 12673–12691 (2019). [CrossRef]

**38. **K. Matsushima, “Formulation of the rotational transformation of wave fields and their application to digital holography,” Appl. Opt. **47**(19), D110–D116 (2008). [CrossRef]

**39. **D. Blinder, C. Schretter, and P. Schelkens, “Global motion compensation for compressing holographic videos,” Opt. Express **26**(20), 25524–25533 (2018). [CrossRef]

**40. **H. K. Cao, S. F. Lin, and E. S. Kim, “Accelerated generation of holographic videos of 3-D objects in rotational motion using a curved hologram-based rotational-motion compensation method,” Opt. Express **26**(16), 21279–21300 (2018). [CrossRef]

**41. **Wikipedia, “Rodrigues’ rotation formula,” https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula.

**42. **ISO 12232, “Photography-Electronic Still Picture Cameras: Determination of ISO Speed” (International Organization for Standardization, Geneva, Switzerland, 1997).