Optica Publishing Group

Reducing the memory usage of computer-generated hologram calculation using accurate high-compressed look-up-table method in color 3D holographic display

Open Access

Abstract

In this paper, we propose an accurate high-compressed look-up-table method that uses less memory to generate holograms. In precomputation, we separate out the longitudinal modulation factors and calculate only the basic horizontal and vertical factors. The horizontal and vertical modulation factors of the other object points are then obtained by simply shifting the basic horizontal and vertical modulation factors while computing the holograms. We perform numerical simulations and optical experiments to verify the proposed method. Numerical simulation results show that the proposed method has the least memory usage, the fastest computation time, and no distortion, and the optical experimental results accord with the numerical simulation results. The proposed method is a simple and effective way to calculate computer-generated holograms for color dynamic holographic display with high speed, low memory usage, and high accuracy, and could be applied in the holographic field in the future.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Holographic display [1], providing all depth information for human eyes [2-4], is regarded as the ultimate three-dimensional (3D) display technology and has attracted increasing attention in recent years. The computer-generated hologram (CGH) is considered the most promising way to realize real-time 3D display by recording holograms digitally. However, two main problems restrict the development of holographic display: the heavy computational load [5-7] and the quality of the reconstructed images [8,9].

Until now, several methods have been studied for CGH computation, such as the point-based method [10-14], the polygon-based method [15-19] and the layer-based method [20-22]. Among these, the point-based method is simple, widely used, and can achieve 3D images with high quality. However, its calculation speed is quite slow because of point-to-point online computation (the process of computing CGHs), and many investigations have been presented to speed up the calculation [23-32]. The look-up-table (LUT) method [6] precomputes all possible fringe patterns (FPs) and stores them in a table; in online computation, the FP for each object point is generated just by reading out the corresponding data from the table. However, it needs a huge memory to store the precomputed FPs of all object points. The novel look-up-table (N-LUT) method [7] was proposed to reduce the memory usage of the LUT: the objects are decomposed into several sliced two-dimensional (2D) object planes, the FPs of the center object points on each sliced plane are precalculated, and the FPs of other points are obtained by simply shifting the precalculated ones. The split look-up-table (S-LUT) method was proposed to further reduce memory usage [10], where the FPs of object points on each slice are generated from split horizontal and vertical modulation factors. However, the horizontal and vertical modulation factors in S-LUT contain depth information, which causes the memory usage to increase rapidly with the number of depth layers. The compressed look-up-table (C-LUT) method [11] was developed to reduce the memory usage of S-LUT by compressing the horizontal and vertical modulation factors into one-dimensional (1D) data arrays, so the memory usage does not change with the number of depth layers.
However, the C-LUT method relies on the approximation that the size of the reconstructed images is much smaller than the distance between the objects and the holograms, which distorts the reconstructed images. The distortion grows with object depth and greatly degrades the quality of the 3D reconstructed images. The accurate compressed look-up-table (AC-LUT) method [33] was developed to reduce the large memory usage of S-LUT and alleviate the distortion of C-LUT without sacrificing computational speed. However, to exploit the shared memory of a graphics processing unit (GPU) for fast calculation, the memory usage must be on the order of kilobytes (KB), while the memory usage of the AC-LUT method is still on the order of megabytes (MB). Therefore, the memory usage needs to be reduced further.

Here, we propose an accurate high-compressed look-up-table (AHC-LUT) method based on Fresnel diffraction and the LUT to obtain accurate reconstructed images with less memory usage (on the order of kilobytes) and faster speed. Numerical simulations and optical experiments are performed to verify the validity of the proposed method.

2. Principles and methods

In the point-based method, a 3D object is decomposed into a large number of points, which are regarded as self-luminous sources emitting spherical waves that irradiate the hologram plane uniformly. The complex amplitude distribution on the hologram plane is obtained by superposing the spherical waves of all points. The field distribution on the hologram plane produced by all point sources propagating in free space can be described as

$$H(x{^{\prime}_p},y{^{\prime}_q}) = \sum\limits_{j = 1}^{N} {{A_j}\exp [i(k{r_j} + {\phi _j})]}$$
where
$${r_j} = {({{{(x{^{\prime}_p} - {x_j})}^2} + {{(y{^{\prime}_q} - {y_j})}^2} + {{(d - {z_j})}^2}} )^{1/2}}$$
$H(x{^{\prime}_p},y{^{\prime}_q})$ is the complex amplitude on the hologram plane and N is the number of object points. $({x_j},{y_j},{z_j})$ and ${A_j}$ are the coordinates and amplitude of object point j, respectively. $\lambda$ is the wavelength and $k = 2\pi /\lambda$ is the wave number. d is the distance between the object plane and the hologram plane. ${\phi _j}$ is a random phase distributed between $0$ and $2\pi$.
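To make the superposition concrete, the direct evaluation of Eq. (1) can be sketched as below. This is an illustrative minimal implementation with assumed toy values (grid size, pixel pitch, wavelength, and object points), not the program used in this work:

```python
import numpy as np

# Direct point-based superposition of Eq. (1), sketched with toy parameters.
wavelength = 532e-9                  # 532 nm green laser (assumed)
k = 2 * np.pi / wavelength           # wave number
d = 0.2                              # object-to-hologram distance [m] (assumed)
pitch = 8e-6                         # hologram pixel pitch [m] (assumed)

P = Q = 64                           # toy hologram resolution
xp = (np.arange(P) - P / 2) * pitch  # hologram x' coordinates
yq = (np.arange(Q) - Q / 2) * pitch  # hologram y' coordinates
Xp, Yq = np.meshgrid(xp, yq, indexing="ij")

# Object points: (x, y, z, amplitude); z is measured from the object plane
points = [(0.0, 0.0, 0.0, 1.0), (1e-4, -1e-4, 1e-3, 0.5)]
rng = np.random.default_rng(0)

H = np.zeros((P, Q), dtype=complex)
for x, y, z, A in points:
    # spherical-wave distance r_j of Eq. (2)
    r = np.sqrt((Xp - x) ** 2 + (Yq - y) ** 2 + (d - z) ** 2)
    phi = rng.uniform(0, 2 * np.pi)  # random initial phase phi_j
    H += A * np.exp(1j * (k * r + phi))

print(H.shape)   # (64, 64)
```

Note the cost: every object point touches every hologram pixel, which is exactly the point-to-point online computation burden that motivates the LUT-family methods discussed above.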

In the Fresnel region [24], Eq. (1) can be written as

$$H({x^{\prime}}_p,{y^{\prime}}_q) = \sum\limits_{j = 1}^N {{A_j}} \exp \{ ik[(d - {z_j}) + \frac{{{{({x^{\prime}}_p - {x_j})}^2} + {{({y^{\prime}}_q - {y_j})}^2}}}{{2(d - {z_j})}}]\}$$

Equation (3) can be simplified as

$$H({x^{\prime}}_p,{y^{\prime}}_q) = \sum\limits_{j = 1}^N {{A_j}} \exp [ik(d - {z_j})]{\{ \exp [\frac{{{{({x^{\prime}}_p - {x_j})}^2} + {{({y^{\prime}}_q - {y_j})}^2}}}{2}]\} ^{(\frac{{ik}}{{d - {z_j}}})}}$$
We split the vertical and horizontal information, and Eq. (4) can be written as:
$$H({x^{\prime}}_p,{y^{\prime}}_q) = \sum\limits_{j = 1}^N {{A_j}} \exp [ik(d - {z_j})]{\{ \exp [\frac{{{{({x^{\prime}}_p - {x_j})}^2}}}{2}]\cdot \exp [\frac{{{{({y^{\prime}}_q - {y_j})}^2}}}{2}]\} ^{(\frac{{ik}}{{d - {z_j}}})}}$$
We define $H({x^{\prime}}_p,{x_j}) = \exp [{({x^{\prime}}_p - {x_j})^2}/2]$ as the horizontal modulation factor, $V({y^{\prime}}_q,{y_j}) = \exp [{({y^{\prime}}_q - {y_j})^2}/2]$ as the vertical modulation factor, and ${L_1}({z_j},\lambda ) = \exp [ik(d - {z_j})]$ and ${L_2}({z_j},\lambda ) = ik/(d - {z_j})$ as the longitudinal and wavelength modulation factors, which contain the wavelength and depth information. Here, $H({x^{\prime}}_p,{x_j})$ and $V({y^{\prime}}_q,{y_j})$ are real numbers.

So Eq. (5) can be simplified as

$$H({x^{\prime}}_p,{y^{\prime}}_q) = \sum\limits_{j = 1}^N {{A_j}} {L_1}({z_j},\lambda ){(H({x^{\prime}}_p,{x_j})V({y^{\prime}}_q,{y_j}))^{{L_2}({z_j},\lambda )}}$$
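As a sanity check on this factorization, the following sketch numerically verifies that the split form of Eq. (6) reproduces the Fresnel term of Eq. (3) for a single object point; all numeric values are assumptions for illustration only:

```python
import numpy as np

# Check that L1 * (H * V)^L2 equals the direct Fresnel term for one point.
wavelength = 532e-9
k = 2 * np.pi / wavelength
d, z = 0.2, 0.0                          # distances [m] (assumed)
xj, yj, Aj = 1e-5, -2e-5, 1.0            # one object point (assumed)
xp, yp = 4e-5, 3e-5                      # one hologram sample (x'_p, y'_q)

# Eq. (3): direct Fresnel term for point j
direct = Aj * np.exp(1j * k * ((d - z)
         + ((xp - xj) ** 2 + (yp - yj) ** 2) / (2 * (d - z))))

# Eq. (6): split modulation factors
Hx = np.exp((xp - xj) ** 2 / 2)          # horizontal factor (real)
Vy = np.exp((yp - yj) ** 2 / 2)          # vertical factor (real)
L1 = np.exp(1j * k * (d - z))            # longitudinal factor
L2 = 1j * k / (d - z)                    # complex exponent factor
split = Aj * L1 * (Hx * Vy) ** L2

assert np.isclose(direct, split)         # the two forms agree
```

The agreement holds because raising the real product $H \cdot V$ to the complex power $L_2$ restores the quadratic Fresnel phase divided by $d - z_j$.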
For the ${N_{xy}}$ object points falling on the same layer of the 3D object, the longitudinal modulation factors ${L_1}({z_j},\lambda )$ and ${L_2}({z_j},\lambda )$ are the same. So Eq. (6) can be written as
$$H({x^{\prime}}_p,{y^{\prime}}_q) = \sum\limits_{{j_z} = 1}^{{N_z}} {[\sum\limits_{{j_{xy}} = 1}^{{N_{xy}}} {{A_{{j_{xy}}}}{{(H({x^{\prime}}_p,{x_{{j_{xy}}}})V({y^{\prime}}_q,{y_{{j_{xy}}}}))}^{{L_2}({z_{{j_z}}},\lambda )}}} } ]{L_1}({z_{{j_z}}},\lambda )$$
where ${j_z}( = 1,2, \cdots ,{N_z})$ is the index of the 2D image plane of the 3D object, and ${j_{xy}}( = 1,2, \cdots ,{N_{xy}})$ is the index of a point within each 2D image plane.

For the ${N_y}$ object points falling on the same vertical line of each 2D image plane, the horizontal modulation factor $H({x^{\prime}}_p,{x_j})$ is the same, so Eq. (7) can be written as

$$H({x^{\prime}}_p,{y^{\prime}}_q) = \sum\limits_{{j_z} = 1}^{{N_z}} {{\{\ }\sum\limits_{{j_x} = 1}^{{N_x}} {[\sum\limits_{{j_y} = 1}^{{N_y}} {{A_{{j_y}}}V{{({y^{\prime}}_q,{y_{{j_y}}})}^{{L_2}({z_{{j_z}}},\lambda )}}} ]} \,H{{({x^{\prime}}_p,{x_{{j_x}}})}^{{L_2}({z_{{j_z}}},\lambda )}}{\}\ }} {L_1}({z_{{j_z}}},\lambda )$$
where ${j_x}( = 1,2, \cdots ,{N_x})$ and ${j_y}( = 1,2, \cdots ,{N_y})$ are the indices of points along the horizontal and vertical lines in each 2D image plane.

We define $H({x^{\prime}}_p,{x_m}) = \exp [{({x^{\prime}}_p - {x_m})^2}/2]$ as the basic horizontal modulation factor and $V({y^{\prime}}_q,{y_m}) = \exp [{({y^{\prime}}_q - {y_m})^2}/2]$ as the basic vertical modulation factor, where ${x_m}$ and ${y_m}$ are the middle points of the row and the column in the first depth layer of the 3D object, respectively. Therefore, the horizontal and vertical modulation factors of the other object points can be obtained by simply shifting the basic horizontal and vertical modulation factors.

Therefore, Eq. (8) can be written as

$$H({x^{\prime}}_p,{y^{\prime}}_q) = \sum\limits_{{j_z} = 1}^{{N_z}} {} \{ \sum\limits_{{j_x} = 1}^{{N_x}} {} [\sum\limits_{{j_y} = 1}^{{N_y}} {} {A_{{j_y}}}V{({y^{\prime}}_q - {y_{{j_y}}},{y_m})^{{L_2}({z_{{j_z}}},\lambda )}}]\ H{({x^{\prime}}_p - {x_{{j_x}}},{x_m})^{{L_2}({z_{{j_z}}},\lambda )}}\} {L_1}({z_{{j_z}}},\lambda )$$
In the proposed method, the basic horizontal and vertical modulation factors undergo shifting processes, and the other horizontal and vertical modulation factors are extracted from them. Thus, if the basic horizontal and vertical modulation factors are too small, they cannot fill the predetermined size of the CGH. To avoid this, the resolution of the basic horizontal and vertical modulation factors must be increased. This resolution is determined by the ratio of the object sampling interval to the pixel size of the CGH; here, $\Delta x$ and $\Delta y$ denote this ratio in the horizontal and vertical directions, respectively. The resolution of the basic horizontal and vertical modulation factors can therefore be given as:
$$\begin{array}{l} \textrm{Resolution of }H({x^{\prime}}_p,{x_m})\ : p + {N_x}\Delta x\\ \textrm{Resolution of }V({y^{\prime}}_q,{y_m})\ : q + {N_y}\Delta y \end{array}$$
where $p$ and $q$ are the horizontal and vertical resolutions of the hologram plane, respectively.

Hence, the total resolution of the offline (precalculated) LUT is $p + {N_x}\Delta x + q + {N_y}\Delta y$.

According to Eqs. (9) and (10), the basic modulation factors $H({x^{\prime}}_p,{x_m})$ and $V({y^{\prime}}_q,{y_m})$ are 1D data arrays containing only real numbers in the offline computation, which leads to less memory usage.

Therefore, the proposed method can be divided into two steps, as shown in Fig. 1.


Fig. 1. Diagram of the proposed method to generate the CGH.


In Fig. 1, .^ denotes element-wise exponentiation.

The first step is to calculate the basic modulation factors $H({x^{\prime}}_p,{x_m})$ and $V({y^{\prime}}_q,{y_m})$, and store them in LUT during the offline computation. The step can be listed as

$$\begin{array}{l} //\textrm{offline computation},\textrm{ to build a LUT}\\ \textrm{For} \,{x^{\prime}}_p + {N_x}\Delta x\textrm{ of hologram and }{x_m}\textrm{ of 2D image planes}\\ \quad H({x^{\prime}}_p,{x_m}) = \exp [{({x^{\prime}}_p - {x_m})^2}/2]\\ \textrm{End}\\ \textrm{For} \,{y^{\prime}}_q + {N_y}\Delta y\textrm{ of hologram and }{y_m}\textrm{ of 2D image planes}\\ \quad V({y^{\prime}}_q,{y_m}) = \exp [{({y^{\prime}}_q - {y_m})^2}/2]\\ \textrm{End} \end{array}$$
In the offline computation, the basic modulation factors $H({x^{\prime}}_p,{x_m})$ and $V({y^{\prime}}_q,{y_m})$ do not contain wavelength information. Therefore, the basic modulation factors are the same for all wavelengths.
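The offline step of Eq. (11) can be sketched as below; the parameter values are assumptions for illustration, and `basic_factor` is a helper name introduced here, not from the original program:

```python
import numpy as np

# Offline step: build the two real-valued 1D basic modulation factor arrays.
pitch = 8e-6            # hologram pixel pitch [m] (assumed)
dx = dy = 2             # ratio of object sampling interval to pixel size (assumed)
p = q = 1024            # hologram resolution (assumed)
Nx = Ny = 400           # object resolution per 2D plane (assumed)

def basic_factor(n_holo, n_obj, ratio, pitch):
    """1D basic factor exp[(u' - u_m)^2 / 2], length n_holo + n_obj * ratio,
    with coordinates centred on the middle point u_m."""
    n = n_holo + n_obj * ratio
    u = (np.arange(n) - n // 2) * pitch
    return np.exp(u ** 2 / 2.0)          # purely real 1D array

H_lut = basic_factor(p, Nx, dx, pitch)   # basic horizontal modulation factor
V_lut = basic_factor(q, Ny, dy, pitch)   # basic vertical modulation factor

# Total LUT resolution: p + Nx*dx + q + Ny*dy real numbers
print(H_lut.size + V_lut.size)           # 3648
```

Because the arrays are 1D and real, and contain neither depth nor wavelength information, the stored LUT stays in the kilobyte range regardless of the number of depth layers or color channels.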

The second step is to read out the modulation factors from LUT and generate the holograms. The step can be listed as

$$\begin{array}{l} //\textrm{online computation},\textrm{ to read out the data from LUT and generate the hologram }\\ \textrm{For each }{\textrm{z}_j}\\ \quad\textrm{ For each }{\textrm{x}_j}\textrm{ that }{\textrm{A}_j} \ne 0\textrm{ (j = 0,1} \ldots {\textrm{N}_\textrm{x}} - 1)\\ \quad\quad\textrm{ For each }{\textrm{y}_q}^{\prime}\textrm{ of hologram and each }{\textrm{y}_j}\textrm{ that }{\textrm{A}_j} \ne 0\textrm{ and }\\ \quad\quad\textrm{ have the same }{\textrm{x}_j}\textrm{(j = 0,1} \ldots {\textrm{N}_y} - 1)\\ \quad\quad\quad\textrm{ V} = {A_j}\ast (V{({y_q}^{\prime} - {y_j},{y_m})^{{L_2}({z_{{j_z}}},\lambda )}}) + V;\ \\ \quad\quad\textrm{ End}\\ \quad\quad\textrm{ For each }{\textrm{x}_p}^{\prime},{y_q}^{\prime}\textrm{ of hologram}\\ \quad\quad\quad HV = V\ast (H{({x_p}^{\prime} - {x_j},{x_m})^{{L_2}({z_{{j_z}}},\lambda )}}) + HV;\\ \quad\quad\textrm{ End}\\ \quad\textrm{ End}\\ \quad\textrm{ For each }{\textrm{x}_p}^{\prime},{y_q}^{\prime}\textrm{ of hologram}\\ \quad\quad H({x_p}^{\prime},{y_q}^{\prime}) = HV\ast {L_1}({z_{{j_z}}},\lambda ) + H({x_p}^{\prime},{y_q}^{\prime});\\ \quad\textrm{ End}\\ \textrm{End} \end{array}$$
In the online computation, for each wavelength we generate the hologram by using the corresponding ${L_1}({z_{{j_z}}},\lambda )$ and ${L_2}({z_{{j_z}}},\lambda )$.
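A minimal sketch of the online step of Eqs. (9) and (12) for a single depth layer is given below, with assumed toy sizes and random amplitudes; a real implementation would loop over all depth layers and skip zero-amplitude points:

```python
import numpy as np

# Online step (one layer): read shifted slices of the 1D LUTs and accumulate.
wavelength = 532e-9
k = 2 * np.pi / wavelength
d, z = 0.2, 1e-3        # distances [m] (assumed)
pitch = 8e-6
p = q = 32              # toy hologram resolution
Nx = Ny = 8             # toy object plane resolution
dx = dy = 1             # sampling ratio (assumed)

# Offline LUTs (basic horizontal/vertical modulation factors)
n_h, n_v = p + Nx * dx, q + Ny * dy
u = (np.arange(n_h) - n_h // 2) * pitch
v = (np.arange(n_v) - n_v // 2) * pitch
H_lut = np.exp(u ** 2 / 2.0)
V_lut = np.exp(v ** 2 / 2.0)

L1 = np.exp(1j * k * (d - z))            # longitudinal factor for this layer
L2 = 1j * k / (d - z)                    # complex exponent for this layer

A = np.random.default_rng(1).random((Nx, Ny))   # layer amplitudes (toy data)

H = np.zeros((p, q), dtype=complex)
for jx in range(Nx):
    V_sum = np.zeros(q, dtype=complex)
    for jy in range(Ny):
        # shifted slice of the vertical LUT, raised to the complex power L2
        V_sum += A[jx, jy] * V_lut[jy * dy : jy * dy + q] ** L2
    # combine with the shifted horizontal LUT slice for this column
    H += np.outer(H_lut[jx * dx : jx * dx + p] ** L2, V_sum)
H *= L1                 # apply the longitudinal factor once per layer

print(H.shape)   # (32, 32)
```

The key saving is that shifting is just array slicing, and $L_1$ is applied once per layer rather than once per point.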

To illustrate the simplicity of the proposed method, we compare the computational complexity and memory usage of the S-LUT, C-LUT, AC-LUT and AHC-LUT methods, where M denotes the memory required to store one real number. From Table 1, we can see that the four methods have the same online computational complexity, while the proposed method has the least offline computational complexity and memory usage. In particular, lower memory usage improves the efficiency of reading out the data online. It is noteworthy that the memory usage of the proposed method remains unchanged in color holographic display because the basic modulation factors do not contain wavelength information.


Table 1. Complexity and memory usage by using S-LUT, C-LUT, AC-LUT, and AHC-LUT methods

To illustrate the computational precision of the proposed method, we compare the reconstruction distortion of the four methods theoretically. From Table 2, we can see that the proposed method involves no approximate computation in the horizontal and vertical directions. Therefore, the reconstructed objects have no distortion in either direction.


Table 2. Distortion ratio by using S-LUT, C-LUT, AC-LUT and AHC-LUT methods

3. Numerical simulation and emulation

To demonstrate the feasibility of the proposed method, we perform numerical simulations. We reconstruct a 3D scene located at different distances; the parameters used are listed in Table 3. Our program runs in MATLAB on a computer with an Intel Core i7-7700 CPU (3.6 GHz) and 8 GB RAM.


Table 3. CGH computation parameters

As shown in Fig. 2, in the S-LUT method, the offline computation time increases with the number of depth layers and remains on the order of seconds (s). In the C-LUT, AC-LUT and AHC-LUT methods, the offline computation time is unchanged with the number of depth layers and is reduced to the order of milliseconds (ms). The offline computation time of the AHC-LUT method is 0.8 ms, about 80 and 46 times faster than the C-LUT and AC-LUT methods, respectively. Therefore, the proposed method has the least offline computation time among the four methods, in accord with the theoretical analysis above. By comparing Figs. 2(a) and 2(b), we can see that the S-LUT and C-LUT methods spend three times the offline computation time when generating color LUTs. However, the offline computation time of the proposed method is the same for monochrome and color LUT generation because the basic modulation factors do not contain wavelength information.


Fig. 2. Comparison of offline computation time by using S-LUT, C-LUT, AC-LUT and AHC-LUT methods. Figure 2(a) is the time of monochrome LUTs, and Fig. 2(b) is the computation time of color LUTs.


As shown in Table 4, in the S-LUT method, the memory usage increases with the number of object depth layers and remains on the order of gigabytes (GB). In the C-LUT method, the memory usage is unchanged with the number of depth layers and is reduced to the order of MB. In the AC-LUT method, the memory usage is reduced further but still remains on the order of MB, while in the AHC-LUT method it is reduced to the order of KB. Therefore, the proposed method has the least memory usage among the four methods, in accord with the theoretical analysis above. It is noteworthy that the memory usage of the proposed method is unchanged both with the number of object depth layers and when generating color LUTs, because the basic modulation factors contain neither depth nor wavelength information.


Table 4. Memory usage by using S-LUT, C-LUT, AC-LUT and AHC-LUT method

As shown in Fig. 3, the C-LUT method is slightly slower than the AC-LUT method, and the AHC-LUT method is the fastest of the four. In contrast, the large memory usage of the S-LUT method limits its online computation time. By comparing Figs. 3(a) and 3(b), we can see that the online computation time increases by about three times when we generate color holograms.


Fig. 3. Comparison of online computation time by using S-LUT, C-LUT, AC-LUT and AHC-LUT methods. Figure 3(a) is the time of monochrome holograms, and Fig. 3(b) is the time of color holograms.


As shown in Table 5, the AHC-LUT, S-LUT and AC-LUT methods have no distortion in the horizontal and vertical directions; therefore, the proposed method can obtain accurate reconstructed images. However, large object depth limits the accuracy of the C-LUT method: the distortion increases with object depth.


Table 5. Distortion ratio by using S-LUT, C-LUT, AC-LUT and AHC-LUT methods

In simulation, we reconstruct the letters ‘B’, ‘I’ and ‘T’ of the same size located at different distances to verify the feasibility of the proposed method. From Fig. 4, we can see that the size of the reconstructed images is accurate when we use the S-LUT, AC-LUT and AHC-LUT methods. However, for the C-LUT method, the distortion becomes obvious as the object depth increases, as can be seen from Figs. 4(f) and 4(j).


Fig. 4. Numerical simulation results by using different methods focused on different distances. Figures 4(a), 4(e) and 4(i), Figs. 4(b), 4(f) and 4(j), Figs. 4(c), 4(g) and 4(k), Figs. 4(d), 4(h) and 4(l) are reconstructed results by using S-LUT, C-LUT, AC-LUT and AHC-LUT methods, respectively. Here, Figs. 4(a)–4(d), Figs. 4(e)–4(h), Figs. 4(i)–4(l) are focused on 200 mm, 250 mm, 300 mm, respectively.


To verify the feasibility of the proposed method further, we reconstruct a 3D scene located at different distances. Figures 5(a) and 5(b) are monochrome numerical simulation results, and Figs. 5(c) and 5(d) are color numerical simulation results. Figures 5(a) and 5(c) show the camera focusing on the teapot at 200 mm, and Figs. 5(b) and 5(d) show the camera focusing on the pyramid at 220 mm. When the camera focuses on the teapot, the pyramid blurs; when it focuses on the pyramid, the teapot blurs. These changes demonstrate the feasibility of the proposed method, which reconstructs 3D images with correct depth information.


Fig. 5. Numerical simulation results by using the AHC-LUT method. Figures 5(a) and 5(b) are the monochrome results focused on 200 mm and 220 mm, respectively. Figures 5(c) and 5(d) are the color results focused on 200 mm and 220 mm, respectively.


4. Optical experiments

To demonstrate the feasibility of the proposed method, we perform optical experiments. The object resolution is 400 × 400 points. The reconstructed image is projected by a phase-only spatial light modulator (SLM) with a pixel size of 8 µm. The resolution of the reconstructed image is 1080 × 1080, and it is captured by a CCD (Lumenera INFINITY 4-11C camera). The wavelength of the reconstruction light is 532 nm and the reconstruction distance is 200 mm. In the optical experiments, the zero-order beam elimination method [34] is adopted to improve the quality of the reconstructed images. In addition, the temporal multiplexing method is used to generate the color holographic display, where the red, green, and blue components are reconstructed and combined into color objects by time integration. The schematic of the optical setup for reconstruction is shown in Fig. 6.


Fig. 6. Setup of the holographic display system: SLM is the spatial light modulator, PC is the personal computer, and L1 and L2 are Fourier transform lenses.


From Fig. 7 we can see that the size of the reconstructed images is accurate using the S-LUT, AC-LUT and AHC-LUT methods. However, when we use the C-LUT method, the distortion increases with object depth, as can be seen from Figs. 7(f) and 7(j). The optical experimental results confirm the accuracy of the proposed method in achieving 3D reconstructed images and match the numerical simulation results well.


Fig. 7. Optical experimental results by using different methods focused on different distances. Figures 7(a), 7(e) and 7(i), Figs. 7(b), 7(f) and 7(j), Figs. 7(c), 7(g) and 7(k), Figs. 7(d), 7(h) and 7(l) are reconstructed results by using S-LUT, C-LUT, AC-LUT and AHC-LUT methods, respectively. Here, Figs. 7(a)–7(d), Figs. 7(e)–7(h), Figs. 7(i)–7(l) are focused on 200 mm, 250 mm, 300 mm, respectively.


Figure 8 shows the optically reconstructed 3D scenes using the proposed method. Figures 8(a) and 8(b) show the monochrome optical experimental results, and Figs. 8(c) and 8(d) show the color optical experimental results. The images focusing on the teapot reconstructed at 200 mm are shown in Figs. 8(a) and 8(c), and the images focusing on the pyramid reconstructed at 220 mm are shown in Figs. 8(b) and 8(d). The optical experimental results show that the proposed method preserves depth information well and are in accord with the numerical simulation results.


Fig. 8. Optical experimental results by using the AHC-LUT method. Figures 8(a) and 8(b) are the monochrome results focused on 200 mm and 220 mm, respectively. Figures 8(c) and 8(d) are the color results focused on 200 mm and 220 mm, respectively.


5. Conclusion

We propose a computation method based on Fresnel diffraction theory and the LUT to reduce the memory usage of S-LUT and alleviate the distortion of C-LUT without sacrificing computational speed. In offline computation, we compress the horizontal and vertical factors into 1D data arrays, separate out the longitudinal modulation factors, and calculate only the basic horizontal and vertical factors. The method therefore achieves the least memory usage, which improves the efficiency of reading out the data online. In online computation, we obtain the horizontal and vertical modulation factors of the other object points by simply shifting the basic factors, and the total online calculation time is shorter than that of the existing methods. Numerical simulations and optical experiments verify the proposed method, and their results match well. Our future work will focus on parallel computing with a GPU, where the proposed method shows great potential because of its small memory requirement. We expect the method to be promising for realizing dynamic 3D holographic display with less memory usage, high speed, and high-quality image reconstruction, with great potential for application in holographic display and other optical diffraction areas.

Funding

Newton Fund; National Basic Research Program of China (973 Program) (2017YFB1002900); National Natural Science Foundation of China (61420106014, 61575024).

References

1. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38(8), 46–53 (2005). [CrossRef]  

2. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16(16), 12372–12386 (2008). [CrossRef]  

3. T. Kozacki, M. Kujawińska, G. Finke, W. Zaperty, and B. Hennelly, “Holographic capture and display systems in circular configurations,” J. Disp. Technol. 8(4), 225–232 (2012). [CrossRef]  

4. F. Yaraş, H. Kang, and L. Onural, “Circular holographic video display system,” Opt. Express 19(10), 9147–9156 (2011). [CrossRef]  

5. A. D. Stein, Z. Wang, and J. J. S. Leigh, “Computer-generated holograms: a simplified ray-tracing approach,” Comput. Phys. 6(4), 389–392 (1992). [CrossRef]  

6. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). [CrossRef]  

7. S. C. Kim and E. S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008). [CrossRef]  

8. T. Shimobaba and T. Ito, “Random phase-free computer-generated hologram,” Opt. Express 23(7), 9549–9554 (2015). [CrossRef]  

9. H. Pang, J. Wang, A. Cao, and Q. Deng, “High-accuracy method for holographic image projection with suppressed speckle noise,” Opt. Express 24(20), 22766–22776 (2016). [CrossRef]  

10. Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. Tanjung, C. Tan, and T. C. Chong, “Fast CGH computation using SLUT on GPU,” Opt. Express 17(21), 18543–18555 (2009). [CrossRef]  

11. J. Jia, Y. Wang, J. Liu, X. Li, Y. Pan, Z. Sun, B. Zhang, Q. Zhao, and W. Jiang, “Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display,” Appl. Opt. 52(7), 1404–1412 (2013). [CrossRef]  

12. S. C. Kim and E. S. Kim, “Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods,” Appl. Opt. 48(6), 1030–1041 (2009). [CrossRef]  

13. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18(19), 19504–19509 (2010). [CrossRef]  

14. S. C. Kim, J. H. Yoon, and E. S. Kim, “Fast generation of three-dimensional video holograms by combined use of data compression and lookup table techniques,” Appl. Opt. 47(32), 5986–5995 (2008). [CrossRef]  

15. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, “Improved full analytical polygon-based method using Fourier analysis of the three-dimensional affine transformation,” Appl. Opt. 53(7), 1354–1362 (2014). [CrossRef]  

16. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, “Fast polygon-based method for calculating computer-generated holograms in three-dimensional display,” Appl. Opt. 52(1), A290–A299 (2013). [CrossRef]  

17. Y. Pan, Y. Wang, J. Liu, X. Li, J. Jia, and Z. Zhang, “Analytical brightness compensation algorithm for traditional polygon-based method in computer-generated holography,” Appl. Opt. 52(18), 4391–4399 (2013). [CrossRef]  

18. J. Park, S. Kim, H. Yeom, H. Kim, H. Zhang, B. Li, Y. Ji, S. Kim, and S. Ko, “Continuous shading and its fast update in fully analytic triangular-mesh-based computer generated hologram,” Opt. Express 23(26), 33893–33901 (2015). [CrossRef]  

19. H. Nishi and K. Matsushima, “Rendering of specular curved objects in polygon-based computer holography,” Appl. Opt. 56(13), F37–F44 (2017). [CrossRef]  

20. M. Bayraktar and M. Özcan, “Method to calculate the far field of three-dimensional objects for computer-generated holography,” Appl. Opt. 49(24), 4647–4654 (2010). [CrossRef]  

21. J. Chen and D. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015). [CrossRef]  

22. H. Zhang, L. Cao, and G. Jin, “Computer-generated hologram with occlusion effect using layer-based processing,” Appl. Opt. 56(13), F138–F143 (2017). [CrossRef]  

23. S. C. Kim, J. M. Kim, and E. S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20(11), 12021–12034 (2012). [CrossRef]  

24. S. C. Kim, X. B. Dong, M. W. Kwon, and E. S. Kim, “Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table,” Opt. Express 21(9), 11568–11584 (2013). [CrossRef]  

25. S. C. Kim, X. B. Dong, and E. S. Kim, “Accelerated one-step generation of full-color holographic videos using a color-tunable novel-look-up-table method for holographic three-dimensional television broadcasting,” Sci. Rep. 5(1), 14056 (2015). [CrossRef]  

26. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram using run-length encoding based recurrence relation,” Opt. Express 23(8), 9852–9857 (2015). [CrossRef]  

27. S. Jiao, Z. Zhuang, and W. Zou, “Fast computer generated hologram calculation with a mini look-up table incorporated with radial symmetric interpolation,” Opt. Express 25(1), 112–123 (2017). [CrossRef]  

28. Z. Zeng, H. Zheng, Y. Yu, and A. K. Asundic, “Off-axis phase-only holograms of 3D objects using accelerated point-based Fresnel diffraction algorithm,” Opt. Lasers Eng. 93, 47–54 (2017). [CrossRef]  

29. H. Araki, N. Takada, S. Ikawa, H. Niwase, Y. Maeda, M. Fujiwara, H. Nakayama, M. Oikawa, T. Kakue, T. Shimobaba, and T. Ito, “Fast time-division color electroholography using a multiple-graphics processing unit cluster system with a single spatial light modulator,” Chin. Opt. Lett. 15(12), 120902 (2017). [CrossRef]  

30. H. Niwase, N. Takada, H. Araki, Y. Maeda, M. Fujiwara, H. Nakayama, T. Kakue, T. Shimobaba, and T. Ito, “Real-time electroholography using a multiple-graphics processing unit cluster system with a single spatial light modulator and the InfiniBand network,” Opt. Eng. 55(9), 093108 (2016). [CrossRef]  

31. H. Niwase, N. Takada, H. Araki, H. Nakayama, A. Sugiyama, T. Kakue, T. Shimobaba, and T. Ito, “Real-time spatiotemporal division multiplexing electroholography with a single graphics processing unit utilizing movie features,” Opt. Express 22(23), 28052–28057 (2014). [CrossRef]  

32. N. Takada, T. Shimobaba, H. Nakayama, A. Shiraki, N. Okada, M. Oikawa, N. Masuda, and T. Ito, “Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system,” Appl. Opt. 51(30), 7303–7307 (2012). [CrossRef]  

33. C. Gao, J. Liu, X. Li, G. Xue, J. Jia, and Y. Wang, “Accurate compressed look up table method for CGH in 3D holographic display,” Opt. Express 23(26), 33194–33204 (2015). [CrossRef]  

34. H. Zhang, J. Xie, J. Liu, and Y. Wang, “Elimination of a zero-order beam induced by a pixelated spatial light modulator for holographic projection,” Appl. Opt. 48(30), 5834–5841 (2009). [CrossRef]  

Cited By

Optica participates in Crossref's Cited-By Linking service. Citing articles from Optica Publishing Group journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (8)

Fig. 1. Diagram of the proposed method to generate the CGH.

Fig. 2. Comparison of offline computation time using the S-LUT, C-LUT, AC-LUT and AHC-LUT methods. (a) Computation time of the monochrome LUTs; (b) computation time of the color LUTs.

Fig. 3. Comparison of online computation time using the S-LUT, C-LUT, AC-LUT and AHC-LUT methods. (a) Computation time of the monochrome holograms; (b) computation time of the color holograms.

Fig. 4. Numerical simulation results using different methods focused at different distances. Panels (a), (e), (i); (b), (f), (j); (c), (g), (k); and (d), (h), (l) are reconstructed with the S-LUT, C-LUT, AC-LUT and AHC-LUT methods, respectively. Panels (a)–(d), (e)–(h) and (i)–(l) are focused at 200 mm, 250 mm and 300 mm, respectively.

Fig. 5. Numerical simulation results using the AHC-LUT method. (a) and (b) are the monochrome results focused at 200 mm and 220 mm, respectively; (c) and (d) are the color results focused at 200 mm and 220 mm, respectively.

Fig. 6. Setup of the holographic display system: SLM, spatial light modulator; PC, personal computer; L1 and L2, Fourier transform lenses.

Fig. 7. Optical experimental results using different methods focused at different distances. Panels (a), (e), (i); (b), (f), (j); (c), (g), (k); and (d), (h), (l) are reconstructed with the S-LUT, C-LUT, AC-LUT and AHC-LUT methods, respectively. Panels (a)–(d), (e)–(h) and (i)–(l) are focused at 200 mm, 250 mm and 300 mm, respectively.

Fig. 8. Optical experimental results using the AHC-LUT method. (a) and (b) are the monochrome results focused at 200 mm and 220 mm, respectively; (c) and (d) are the color results focused at 200 mm and 220 mm, respectively.

Tables (5)

Table 1. Complexity and memory usage of the S-LUT, C-LUT, AC-LUT and AHC-LUT methods

Table 2. Distortion ratio of the S-LUT, C-LUT, AC-LUT and AHC-LUT methods

Table 3. CGH computation parameters

Table 4. Memory usage of the S-LUT, C-LUT, AC-LUT and AHC-LUT methods

Table 5. Distortion ratio of the S-LUT, C-LUT, AC-LUT and AHC-LUT methods

Equations (12)


\[ H(x_p, y_q) = \sum_{j=0}^{N-1} A_j \exp\left[ i \left( k r_j + \phi_j \right) \right] \tag{1} \]

\[ r_j = \sqrt{ (x_p - x_j)^2 + (y_q - y_j)^2 + (d - z_j)^2 } \tag{2} \]

\[ H(x_p, y_q) = \sum_{j=1}^{N} A_j \exp\left\{ i k \left[ (d - z_j) + \frac{(x_p - x_j)^2 + (y_q - y_j)^2}{2(d - z_j)} \right] \right\} \tag{3} \]

\[ H(x_p, y_q) = \sum_{j=1}^{N} A_j \exp\left[ i k (d - z_j) \right] \left\{ \exp\left[ \frac{(x_p - x_j)^2 + (y_q - y_j)^2}{2} \right] \right\}^{\frac{ik}{d - z_j}} \tag{4} \]

\[ H(x_p, y_q) = \sum_{j=1}^{N} A_j \exp\left[ i k (d - z_j) \right] \left\{ \exp\left[ \frac{(x_p - x_j)^2}{2} \right] \exp\left[ \frac{(y_q - y_j)^2}{2} \right] \right\}^{\frac{ik}{d - z_j}} \tag{5} \]

\[ H(x_p, y_q) = \sum_{j=1}^{N} A_j \, L_1(z_j, \lambda) \left[ H(x_p, x_j)\, V(y_q, y_j) \right]^{L_2(z_j, \lambda)} \tag{6} \]
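As a sanity check of the factorization above, the following sketch (an illustration, not code from the paper) evaluates both the paraxial Fresnel kernel and the split form for a single object point, taking \(L_1(z_j,\lambda)=\exp[ik(d-z_j)]\), \(L_2(z_j,\lambda)=ik/(d-z_j)\), \(H(x_p,x_j)=\exp[(x_p-x_j)^2/2]\) and \(V(y_q,y_j)=\exp[(y_q-y_j)^2/2]\), consistent with the preceding expansion; the wavelength, distances and coordinates are arbitrary assumed values:

```python
import numpy as np

# arbitrary illustrative parameters
wavelength = 532e-9                 # assumed green laser line (m)
k = 2 * np.pi / wavelength
d, z_j = 0.3, 0.01                  # hologram distance and point depth (m)
x_p, y_q = 1.0e-3, -0.5e-3          # hologram-plane sample (m)
x_j, y_j = 0.2e-3, 0.1e-3           # object-point coordinates (m)

# paraxial (Fresnel) kernel, as in the expanded sum
direct = np.exp(1j * k * ((d - z_j)
                + ((x_p - x_j) ** 2 + (y_q - y_j) ** 2) / (2 * (d - z_j))))

# split form: L1 * (H * V) ** L2
L1 = np.exp(1j * k * (d - z_j))
L2 = 1j * k / (d - z_j)
H = np.exp((x_p - x_j) ** 2 / 2)    # depth-independent horizontal factor
V = np.exp((y_q - y_j) ** 2 / 2)    # depth-independent vertical factor
split = L1 * (H * V) ** L2

print(np.allclose(direct, split))
```

The two expressions agree to floating-point precision, since \((e^{a})^{ik/(d-z_j)} = e^{ika/(d-z_j)}\) for real \(a\); this is what lets the depth-dependent part be pulled out of the tables.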
\[ H(x_p, y_q) = \sum_{j_z=1}^{N_z} \left[ \sum_{j_{xy}=1}^{N_{xy}} A_{j_{xy}} \left( H(x_p, x_{j_{xy}})\, V(y_q, y_{j_{xy}}) \right)^{L_2(z_{j_z}, \lambda)} \right] L_1(z_{j_z}, \lambda) \tag{7} \]

\[ H(x_p, y_q) = \sum_{j_z=1}^{N_z} \left\{ \sum_{j_x=1}^{N_x} \left[ \sum_{j_y=1}^{N_y} A_{j_y}\, V(y_q, y_{j_y})^{L_2(z_{j_z}, \lambda)} \right] H(x_p, x_{j_x})^{L_2(z_{j_z}, \lambda)} \right\} L_1(z_{j_z}, \lambda) \tag{8} \]

\[ H(x_p, y_q) = \sum_{j_z=1}^{N_z} \left\{ \sum_{j_x=1}^{N_x} \left[ \sum_{j_y=1}^{N_y} A_{j_y}\, V(y_q - y_{j_y}, y_m)^{L_2(z_{j_z}, \lambda)} \right] H(x_p - x_{j_x}, x_m)^{L_2(z_{j_z}, \lambda)} \right\} L_1(z_{j_z}, \lambda) \tag{9} \]

\[ \text{Resolution of } H(x_p, x_m):\ (p + N_x)\,\Delta x, \qquad \text{Resolution of } V(y_q, y_m):\ (q + N_y)\,\Delta y \tag{10} \]
// offline computation, to build the LUT
For each x_p of the (p + N_x)·Δx samples of the hologram plane and each x_m of the 2D image planes
    H(x_p, x_m) = exp[(x_p − x_m)² / 2]
End
For each y_q of the (q + N_y)·Δy samples of the hologram plane and each y_m of the 2D image planes
    V(y_q, y_m) = exp[(y_q − y_m)² / 2]
End
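The offline stage can be sketched in NumPy as two one-dimensional tables indexed by the shifts x_p − x_m and y_q − y_m. The resolutions and pixel pitch below are illustrative assumptions rather than the paper's experimental parameters, and the names `H_tab`/`V_tab` are hypothetical:

```python
import numpy as np

# illustrative parameters (assumed, not the paper's experimental values)
P, Q = 256, 256          # hologram resolution (pixels)
Nx, Ny = 128, 128        # image-plane resolution (pixels)
dx = dy = 8e-6           # pixel pitch (m)

# offline computation: every possible shift x_p - x_m spans P + Nx - 1
# distinct values, so one 1D table per axis covers all object points,
# independent of the number of depth layers and of the depth itself.
u = np.arange(-(Nx - 1), P) * dx
H_tab = np.exp(u ** 2 / 2)           # basic horizontal modulation factor
v = np.arange(-(Ny - 1), Q) * dy
V_tab = np.exp(v ** 2 / 2)           # basic vertical modulation factor

print(H_tab.shape, V_tab.shape)
```

Because the tables carry no depth information, their memory footprint stays fixed as the number of depth layers grows, which is the point of separating out the longitudinal factors.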
// online computation, to read out the data from the LUT and generate the hologram
For each z_j
    For each x_j with A_j ≠ 0 (j = 0, 1, …, N_x − 1)
        For each y_q of the hologram and each y_j with A_j ≠ 0 sharing the same x_j (j = 0, 1, …, N_y − 1)
            V = A_j · V(y_q − y_j, y_m)^{L_2(z_{j_z}, λ)} + V;
        End
        For each x_p, y_q of the hologram
            HV = V · H(x_p − x_j, x_m)^{L_2(z_{j_z}, λ)} + HV;
        End
    End
    For each x_p, y_q of the hologram
        H(x_p, y_q) = HV · L_1(z_{j_z}, λ) + H(x_p, y_q);
    End
End
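The online loop can be sketched in NumPy as follows. All parameters (resolutions, pitch, wavelength, distance) and the toy point cloud are made-up illustrations, and `H_tab`/`V_tab` are hypothetical names for the precomputed basic factors. Per depth layer, vertical factors of points sharing a column are accumulated first, then combined with the shifted horizontal factor and the longitudinal factors L1 and L2:

```python
import numpy as np

# illustrative parameters (assumed)
P, Q = 64, 64                       # hologram resolution
Nx, Ny = 16, 16                     # image-plane resolution
dx = dy = 8e-6                      # pixel pitch (m)
wavelength = 532e-9
k = 2 * np.pi / wavelength
d = 0.3                             # hologram distance (m)

# basic depth-independent tables (the offline LUT)
H_tab = np.exp((np.arange(-(Nx - 1), P) * dx) ** 2 / 2)
V_tab = np.exp((np.arange(-(Ny - 1), Q) * dy) ** 2 / 2)

# toy object: depth z -> {column index jx: [(row index jy, amplitude A)]}
layers = {0.00: {3: [(5, 1.0)], 10: [(2, 0.5)]},
          0.01: {7: [(7, 0.8)]}}

xp, yq = np.arange(P), np.arange(Q)
hologram = np.zeros((Q, P), dtype=complex)
for z, cols in layers.items():
    L1 = np.exp(1j * k * (d - z))   # longitudinal factors for this layer
    L2 = 1j * k / (d - z)
    HV = np.zeros((Q, P), dtype=complex)
    for jx, pts in cols.items():
        # accumulate vertical factors of points sharing this column
        Vcol = np.zeros(Q, dtype=complex)
        for jy, A in pts:
            Vcol += A * V_tab[yq - jy + (Ny - 1)] ** L2
        # shift the basic horizontal factor instead of recomputing it
        HV += np.outer(Vcol, H_tab[xp - jx + (Nx - 1)] ** L2)
    hologram += HV * L1

print(hologram.shape)
```

The result matches a direct point-by-point Fresnel summation up to floating-point error, consistent with the paper's claim that shifting the basic factors introduces no distortion.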