## Abstract

We propose a simple and fast algorithm for computer-generated hologram (CGH) calculation based on pinhole-type integral imaging (II) using a look-up table. The method consists of two steps. First, the unity-amplitude diffraction pattern of the center pinhole on the hologram plane is pre-calculated and stored as many sub-regions. Second, the diffraction patterns of the other pinholes are obtained by simply shifting and tiling the pre-calculated sub-regions, and the final CGH is obtained by adding them all together. The calculation time is short because only additions and multiplications of the stored diffraction pattern are required. In addition, the required memory space is small since only one diffraction pattern is stored. Numerical simulation and reconstruction are performed on both a plane object and an object with a continuous depth profile to verify the proposed method. The results show that the proposed method can easily achieve real-time hologram generation with several CPU threads running simultaneously.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Holography has been considered an ideal three-dimensional (3D) display method since it was first proposed by Gabor in 1948 [1]. It can reconstruct the optical wave field of a 3D scene so that full depth cues can be observed directly by the human eye [2]. However, one problem of holographic 3D display is the difficulty of hologram recording, which requires coherent illumination, a dark environment and a bulky optical system. Computer-generated holography (CGH) has become a promising solution thanks to the advancement of computers. CGH calculates the hologram numerically based on the full 3D information of a computer-generated (CG) object. In CGH, the CG object is often approximated as a collection of points [3] or slices [4], and the hologram is generated by summing the diffraction fringe patterns of each point or slice [5]. To accelerate the computation for real-time holographic 3D display, many methods have been proposed, such as the look-up table (LUT) method [6, 7] and the wavefront-recording plane (WRP) method [8, 9]. However, CGH is available only for CG objects and not for objects in the real world. Thus, the hologram acquisition of real-world 3D objects remains a problem.

One effective approach for generating holograms of real objects is to use integral imaging (II) [10–14]. Integral imaging is a 3D imaging and display technique using a micro-lens array. It can record the 3D information of moving objects under natural light. Mishina et al. first proposed producing holograms by calculation from elemental images (EIs) captured by II [10]: by numerically simulating the reconstruction process of the 3D image in II using Fresnel diffraction theory, the complex field of the integrated 3D image is obtained on the Fresnel hologram plane. Shaked et al. calculated the integral of each EI after multiplying it with a slanted plane wave to form each point of a Fourier hologram [11]. Park et al. proposed generating exact Fourier and Fresnel holograms using sub-images instead of EIs [12]; each sub-image is an orthographic-projection view image extracted from the EIs. Yeom et al. proposed using integral imaging to generate depth-slice images of real objects, with a phase-only hologram generated by an iterative Fresnel transform algorithm for multiple planes [13]. Lee et al. proposed generating a Fourier hologram from the light field of real objects captured by a plenoptic camera [14]. The above methods involve complex diffraction-integral calculations and EI processing, which cause a heavy computational burden and long running times, making real-time processing difficult. Holographic stereogram (HS) based methods excel at high-speed hologram calculation thanks to the use of the fast Fourier transform (FFT): they compute the FFT of multi-view images to generate elemental holograms and tile them together to form the entire hologram [15–20].

In this paper, we propose a simple and fast CGH calculation algorithm using a look-up table based on pinhole-type II [21]. LUT methods [6, 7] pre-calculate and store the diffraction fringe patterns of point light sources on the CGH; the LUT is then referenced during the CGH calculation. In pinhole-type II, each pinhole can be regarded as a point light source that emits different light rays in different directions. The diffraction fringe pattern of one pinhole on the CGH can be divided into many square sub-regions, each related to one pixel of the corresponding EI. We pre-calculate and store only the sub-regions of the unity-amplitude diffraction pattern of the center pinhole; the diffraction patterns of the other pinholes are obtained by simply shifting and tiling the pre-calculated sub-regions.

## 2. Principle

Figure 1(a) shows the principle of the proposed method. The 3D image is reconstructed by the pinhole-type II system, and the CGH plane is used to record the object beam by simulating light propagation from the pinholes to the CGH. The light emitted from each pixel of an EI can only pass through the corresponding pinhole, so each pinhole can be regarded as a point light source that emits different pyramid-shaped light rays in different directions, as shown in Fig. 1(b), in which only the center pinhole is drawn. The intensities of the different light rays are determined by the pixel values of the EI. For the center pinhole, the recorded complex amplitude *u*(*x*, *y*) on the CGH can be divided into many square sub-regions, and the number of sub-regions is equal to the number of pixels of the EI. Assuming that the sub-region number or EI pixel number in one dimension is *N*, the (*k*, *l*)-th sub-region *u _{kl}*(*x*, *y*) is related to the (*N* + 1 − *k*, *N* + 1 − *l*)-th pixel of the center EI, and can be calculated using the ray-tracing technique:

$${u}_{kl}(x,y)=\sqrt{E{I}_{N+1-k,N+1-l}^{0,0}}\;\frac{\mathrm{exp}(j2\pi r/\lambda )}{r},\quad r=\sqrt{{x}^{2}+{y}^{2}+{L}^{2}}, \qquad (1)$$

where (*x*, *y*) is a coordinate on the (*k*, *l*)-th sub-region of the CGH measured from the pinhole position, *L* is the perpendicular distance between the CGH and the pinhole array, and $E{I}_{N+1-k,N+1-l}^{0,0}$ is the intensity value of the (*N* + 1 − *k*, *N* + 1 − *l*)-th pixel of the center EI. Here, the superscript (0, 0) means that the center EI is numbered as the (0, 0)-th EI. Further, *p* is the pitch of the pinholes or EIs, and *g* is the distance between the pinhole array and the EI array. It is noted that the pinhole is considered a point here, so we do not need to consider the diffraction of a pinhole with a limited aperture. By summing all of the sub-regions, we get the complex amplitude *u*(*x*, *y*) of the center pinhole on the CGH:

$$u(x,y)=\sum_{k=1}^{N}\sum_{l=1}^{N}{u}_{kl}(x,y). \qquad (2)$$

Next, the LUT method is applied to Eq. (1). The unity-amplitude diffraction pattern of the (*k*, *l*)-th sub-region, *v _{kl}*(*x*, *y*), is pre-calculated and stored as:

$${v}_{kl}(x,y)=\frac{\mathrm{exp}(j2\pi r/\lambda )}{r},\quad r=\sqrt{{x}^{2}+{y}^{2}+{L}^{2}}, \qquad (3)$$

and the complex amplitude *u*(*x*, *y*) of the center pinhole on the CGH is obtained as:

$$u(x,y)=\sum_{k=1}^{N}\sum_{l=1}^{N}\sqrt{E{I}_{N+1-k,N+1-l}^{0,0}}\,{v}_{kl}(x,y), \qquad (4)$$

in which the amplitude of each EI pixel is multiplied with the corresponding *v _{kl}*(*x*, *y*) and the products are combined to generate the diffraction pattern. So far, the unity-amplitude diffraction pattern of the center pinhole has been pre-calculated as $N\times N$ sub-regions and stored in a LUT. The diffraction patterns of the other pinholes can then be obtained by simply shifting and tiling the pre-calculated sub-regions.
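As an illustration of this pre-calculation step, the following NumPy sketch builds the center pinhole's unity-amplitude spherical-wave pattern, stores it as an *N* × *N* table of square sub-regions, and then weights it with a stand-in elemental image as in Eq. (4). The parameter values follow the example given later in the text; the array layout and the random EI are our own assumptions, not the paper's code.

```python
import numpy as np

# Assumed parameters (match the example in the text): pinhole pitch p, gap g,
# CGH distance L, CGH sampling pitch p_cgh, and N EI pixels per dimension.
wavelength = 600e-9
p, g, L = 0.25e-3, 5e-3, 30e-3
p_cgh = 10e-6
N = 25

# The center pinhole's pattern on the CGH spans p*L/(g*p_cgh) pixels per side.
n_pat = round(p * L / (g * p_cgh))                 # 150 pixels per side
coords = (np.arange(n_pat) - n_pat / 2 + 0.5) * p_cgh
x, y = np.meshgrid(coords, coords)

# Unity-amplitude spherical wave from the pinhole (treated as a point source).
r = np.sqrt(x**2 + y**2 + L**2)
v = np.exp(1j * 2 * np.pi * r / wavelength) / r

# Split the pattern into N x N square sub-regions v_kl: this is the LUT.
sub = n_pat // N                                   # pixels per sub-region (6)
lut = v.reshape(N, sub, N, sub).swapaxes(1, 2)     # lut[k, l] = (k,l)-th sub-region

# Eq. (4): weight each sub-region by the amplitude of the corresponding EI pixel,
# with the (N+1-k, N+1-l) index flip. A random EI stands in for real data.
ei = np.random.rand(N, N)
amp = np.sqrt(ei[::-1, ::-1])                      # amplitude = sqrt(intensity)
u = (amp[:, :, None, None] * lut).swapaxes(1, 2).reshape(n_pat, n_pat)
```

Because the sub-regions partition one pre-computed array, the memory cost is exactly one complex pattern of `n_pat × n_pat` samples, as the text notes.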

For the (*m*, *n*)-th pinhole, the unity-amplitude diffraction pattern can be obtained by simply shifting the pre-calculated sub-regions *v _{kl}*(*x*, *y*) by amounts *mp* and *np* in the *x* and *y* directions and adding them together. The total diffraction pattern *U _{total}*(*x*, *y*) on the CGH can be expressed in terms of the shifted versions of the pre-calculated sub-regions *v _{kl}*(*x*, *y*) of Eq. (3) as shown in Eq. (5):

$${U}_{total}(x,y)=\sum_{m,n}\sum_{k=1}^{N}\sum_{l=1}^{N}\sqrt{E{I}_{N+1-k,N+1-l}^{m,n}}\,{e}^{j{\phi}_{random}}\,{v}_{kl}(x-mp,\;y-np), \qquad (5)$$

where *m* and *n* are the sequence numbers of the EI in the *x* and *y* directions, and *M* is the number of EIs in one dimension. Here, a random phase distribution ${\phi}_{random}$ in the range [0, 2$\pi$) is added to the elemental images to equalize the power spectrum on the CGH. Equation (5) shows that the diffraction pattern of an integrated image can be obtained just by shifting the sub-regions *v _{kl}*(*x*, *y*) according to the displacements of the pinholes from the center pinhole and adding them all together.
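The shift-and-tile accumulation of Eq. (5) can be sketched as follows. The sizes here are deliberately tiny toy values (the paper uses *M* = 40 pinholes and *N* = 25 EI pixels per side), and the pre-calculated pattern and elemental images are random stand-ins; only the accumulation structure is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy sizes, not the paper's values.
M, N, sub = 4, 5, 3          # pinholes per side, sub-regions per side, px per sub-region
n_pat = N * sub              # one pinhole's pattern size on the CGH
shift = 2 * sub              # pinhole pitch in CGH pixels (p / p_cgh), assumed value

# Stand-in for the pre-calculated unity-amplitude pattern of the center pinhole.
v = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_pat, n_pat)))

# Elemental image intensities and the random phase that equalizes the spectrum.
ei = rng.random((M, M, N, N))
phi = rng.uniform(0, 2 * np.pi, (M, M, N, N))

# Canvas large enough to hold all shifted copies of the pattern.
size = (M - 1) * shift + n_pat
U_total = np.zeros((size, size), complex)

for m in range(M):
    for n in range(M):
        # Amplitudes of the (m, n)-th EI, flipped per the (N+1-k, N+1-l) rule,
        # expanded so each EI pixel scales its whole sub-region of v.
        amp = np.sqrt(ei[m, n, ::-1, ::-1]) * np.exp(1j * phi[m, n, ::-1, ::-1])
        weight = np.kron(amp, np.ones((sub, sub)))
        r0, c0 = m * shift, n * shift          # shift by (m*p, n*p) in pixels
        U_total[r0:r0 + n_pat, c0:c0 + n_pat] += weight * v
```

The inner work per pinhole is one elementwise multiply and one add over the stored pattern, which is why only multiplications and additions are needed at run time.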

Finally, the CGH is calculated. An amplitude-type CGH can be obtained by calculating the light intensity $I(x,y)={\left|{U}_{total}(x,y)+R(x,y)\right|}^{2}$, where *R*(*x*, *y*) is a reference light. A phase-only CGH can be obtained by calculating $\theta (x,y)=\mathrm{arg}\left\{{U}_{total}(x,y)\right\}$, where the operator arg{·} returns the argument of *U _{total}*(*x*, *y*).
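This final encoding step is a one-liner in either case. In the sketch below the object field and the reference wave are small synthetic placeholders (a quadratic phase and a tilted plane wave), chosen only so the snippet is self-contained.

```python
import numpy as np

# Placeholder fields: U_total stands in for the recorded object field,
# R for a tilted plane reference wave. Both are assumptions for illustration.
n = 64
y, x = np.mgrid[:n, :n]
U_total = np.exp(1j * 2 * np.pi * (x**2 + y**2) / n**2)
R = np.exp(1j * 2 * np.pi * 0.1 * x)

I = np.abs(U_total + R) ** 2       # amplitude-type CGH: I = |U_total + R|^2
theta = np.angle(U_total)          # phase-only CGH: theta = arg{U_total}
```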

The required memory space is very small since only the unity-amplitude diffraction pattern of the center pinhole is stored. The resolution of the unity-amplitude diffraction pattern is $\frac{pL}{g{p}_{cgh}}\times \frac{pL}{g{p}_{cgh}}$ pixels, where *p _{cgh}* is the sampling pitch on the CGH. Thus, the memory size is given as $2\times \frac{pL}{g{p}_{cgh}}\times \frac{pL}{g{p}_{cgh}}\times 8\,\text{bits}=\frac{2{p}^{2}{L}^{2}}{{g}^{2}{p}_{cgh}^{2}}\,\text{bytes}$, where the factor 2 accounts for storing the real and imaginary parts. It is noted that the maximum viewing angle $\theta$ of II should be smaller than the maximum diffraction angle $\phi$ of the CGH to avoid aliasing [10], giving the following relationship:

$$\theta ={\mathrm{tan}}^{-1}\left(\frac{p}{2g}\right)\le \phi ={\mathrm{sin}}^{-1}\left(\frac{\lambda }{2{p}_{cgh}}\right).$$

For example, when *p* = 0.25mm, *g* = 5mm, *L* = 30mm, and *p _{cgh}* = 10μm, the resolution of the unity-amplitude diffraction pattern is 150$\times$150, and the memory size is about 44 Kbytes. Interestingly, if we reduce the value of *L* (i.e., place the CGH plane closer to the pinholes), the computational amount and memory size are both reduced because the size of the diffraction pattern of a pinhole is reduced. Thus, the computational time and memory size can be further reduced by reducing the value of *L*. To generate a Fresnel hologram with a large value of *L*, we can first set a virtual plane close to the pinhole plane to record the complex amplitude of the pinholes, and the complex amplitude on the CGH can then be calculated using Fresnel diffraction from the virtual plane to the CGH, as shown in Fig. 2(a), just like the WRP method in [8]. The massive diffraction-integral computation and EI processing of [10–14] are not needed in the proposed method; only multiplications and additions of the stored diffraction pattern are required. The computational amount between the pinhole plane and the virtual plane can be expressed as $2\frac{{p}^{2}{L}^{2}}{{g}^{2}{p}_{cgh}^{2}}\cdot {M}^{2}$, where the factor 2 accounts for the calculations of the real and imaginary parts. The proposed method is similar to the point-based LUT method [7], but the difference is that the numerous object points are replaced by a small number of pinholes, resulting in a high computational speed.
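The example figures above can be verified directly from the memory formula; the only assumption is the paper's 8 bits (1 byte) per stored real or imaginary sample.

```python
# Check of the example numbers: p = 0.25 mm, g = 5 mm, L = 30 mm, p_cgh = 10 um.
p, g, L, p_cgh = 0.25e-3, 5e-3, 30e-3, 10e-6

n_side = p * L / (g * p_cgh)       # pattern resolution per side -> 150
mem_bytes = 2 * n_side ** 2        # real + imaginary parts, 1 byte each
print(round(n_side), mem_bytes / 1024)   # 150 pixels, ~44 Kbytes
```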

Since the CGH is generated by recording the integrated optical field of pinhole-type II, the drawback of the proposed method is that it inherits the low resolution and limited depth of field (DOF) of II. In the holographic display process, as shown in Fig. 2(b), the CGH reproduces light rays that converge at the pinholes. Thus, at the reconstructed object plane (ROP), the resolution *R*_{o} is given as:

$${R}_{o}=\frac{p}{N}\cdot \frac{l}{g},$$

where *l* is the distance between the ROP and the pinhole plane. To enhance the resolution, on the one hand, one can reduce *p*/*N* by using a high-resolution camera for the II capture. On the other hand, one can reduce *l* by placing the ROP close to the pinhole plane, that is, by capturing a close object with II. An effective solution is to use a plenoptic camera [14], in which the object is imaged in the neighborhood of the micro-lens array through a camera lens.

Since the ROP should be close to the pinhole plane to avoid resolution degradation, the DOF is limited. We can set multiple pinhole planes to enhance the DOF. Specifically, we can use a plenoptic camera to capture objects at different depths separately, and set multiple pinhole planes at different depths according to the object depths. The unity-amplitude diffraction pattern of the center pinhole in each pinhole plane is pre-calculated and stored in the LUT, and the CGH is generated by calculating the diffraction pattern of each pinhole plane separately and adding them together.

## 3. Numerical simulation and reconstruction results

We performed numerical simulation to verify the proposed method. Instead of shooting real objects with a plenoptic camera, we simply generated the EIs of two letters, “3” and “D”, using computational II. The two letters “3” and “D” are −10mm and 10mm from the pinhole array, respectively, as shown in Fig. 3(a). The number of pinholes is 40$\times$40, and each EI contains 25$\times$25 pixels. The computer-generated elemental image array is shown in Fig. 3(b). A virtual plane was set 15mm from the pinhole array to record the complex amplitude of the pinholes, and a Fresnel CGH was then calculated using Fresnel diffraction. The parameters are set as: *p* = 0.25mm, *g* = 5mm, *λ* = 600nm, and the sampling pitch on the virtual plane is *p _{vp}* = 10μm. Thus, the maximum viewing angle of II, $\theta ={\mathrm{tan}}^{-1}(p/2g)\approx {1.43}^{\circ}$, is smaller than the maximum diffraction angle on the virtual plane, $\phi ={\mathrm{sin}}^{-1}(\lambda /2{p}_{vp})\approx {1.72}^{\circ}$. The resolution of the unity-amplitude diffraction pattern of the center pinhole on the virtual plane is 75$\times$75, resulting in a memory size of about 11 Kbytes, and Fig. 3(c) shows its amplitude and phase. It was then divided into 25$\times$25 sub-regions. Using Eq. (5), the complex amplitude on the virtual plane is recorded with 1000$\times$1000 pixels. After Fresnel diffraction over a 500mm distance, a Fresnel hologram is generated, also with 1000$\times$1000 pixels; its amplitude and phase are shown in Fig. 3(d). The sampling pitch on the CGH is given as ${p}_{cgh}=z\lambda /(n{p}_{vp})=0.5\times 6\times {10}^{-7}/(1000\times {10}^{-5})=30\,\mu\text{m}$, where *z* is the diffraction distance and *n* is the pixel number. The simulation was performed in MATLAB R2016b on an Intel E3-1230 processor (3.4 GHz) with 8 Gbytes of memory. The generation of the complex amplitude on the virtual plane took about 102ms, and the Fresnel diffraction calculation took about 44ms, so the total calculation time was about 146ms. Since we used only one of the eight CPU threads, the calculation time could be further reduced to achieve real-time processing by running all eight threads simultaneously.
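The sampling figures quoted above follow directly from the stated parameters and can be checked with a few lines of arithmetic:

```python
import math

# CGH sampling pitch p_cgh = z * lambda / (n * p_vp), with the paper's values.
z, lam, n_pix, p_vp = 0.5, 600e-9, 1000, 10e-6
p_cgh = z * lam / (n_pix * p_vp)                 # 3e-5 m = 30 um

# Anti-aliasing condition: viewing angle of II below diffraction angle.
p, g = 0.25e-3, 5e-3
theta = math.degrees(math.atan(p / (2 * g)))     # ~1.43 degrees
phi = math.degrees(math.asin(lam / (2 * p_vp)))  # ~1.72 degrees
```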

Figure 4 shows the numerical reconstruction results at different distances from the CGH. It can be seen that the two letters “3” and “D” are correctly reconstructed at 505mm and 525mm, respectively.

Next, the proposed method was applied to a 3D object with a continuous depth profile. Figure 5(a) shows the bee model built in the 3ds Max modeling software. Figure 5(b) shows the distances from different parts of the object to the virtual camera array. The parameters are the same as in the above simulation. The camera array consisted of 40$\times$40 virtual cameras, and the pitch and focal length were set as *p* = 0.25mm and *g* = 5mm. Figure 5(c) shows the captured EI array, which contains 1000$\times$1000 pixels. A virtual plane was set 15mm from the pinhole array to record the complex amplitude of the pinholes, and a Fresnel CGH with 1000$\times$1000 pixels was then calculated at a distance of 500mm from the virtual plane. Figure 5(d) shows its amplitude and phase. The generation of the complex amplitude on the virtual plane took about 138ms, and the Fresnel diffraction calculation took about 41ms, so the total calculation time was about 179ms. Because the test program executed 40$\times$40 serial loops to calculate the complex amplitude on the virtual plane, the running time was long; it can be reduced by program optimization or by running several CPU threads simultaneously to achieve real-time processing.

Figure 6 shows the numerical reconstruction results at different distances from CGH. It can be clearly seen that different parts of the bee are reconstructed at different depths: the rear feet at 493.7mm, the stomach at 494.9mm, the front feet at 497.2mm and the eyes at 499.8mm. Figure 7 shows the numerical reconstruction results from nine different viewing directions, which can be generated from different parts of the CGH. The relative shift between different parts is clearly observed, confirming successful reconstruction of the 3D object.

Since the HS-based methods [15–20] can calculate at high speed because the FFT is generally used, we compare the computational amounts of the HS-based methods and the proposed method. The HS-based methods compute the FFT of multi-view images to generate elemental holograms and tile them together to form the entire hologram. Assuming *M*$\times$*M* multi-view images, each with *N*$\times$*N* pixels, the total computational amount of the FFT is ${M}^{2}\beta {N}^{2}{\mathrm{log}}_{2}N$, where $\beta$ is the number of arithmetic operations (e.g., additions and multiplications) in the FFT. In the proposed method, the computational amount is $2\frac{{p}^{2}{L}^{2}}{{g}^{2}{p}_{cgh}^{2}}\cdot {M}^{2}$, where the factor 2 accounts for the calculations of the real and imaginary parts. Note that this is not the minimum, since we can set the virtual plane very close to the pinhole plane to reduce the value of *L*. When *L* is reduced to $Ng{p}_{cgh}/p$, each sub-region on the virtual plane contains only 1 pixel, and the minimum computational amount is $2{M}^{2}{N}^{2}$, which is smaller than ${M}^{2}\beta {N}^{2}{\mathrm{log}}_{2}N$, especially when *N* is large. Thus, even though the HS method is very fast thanks to the FFT, the proposed method still has a speed advantage over it.
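The operation-count comparison can be made concrete with the paper's sizes. Here $\beta = 5$ is an assumed per-sample FFT operation count (not a value from the paper), so the ratio is illustrative only.

```python
import math

# Paper's sizes: M x M = 40 x 40 views, N x N = 25 x 25 pixels each.
# beta = 5 is an assumed FFT operation count per sample.
M, N, beta = 40, 25, 5

hs_ops = M**2 * beta * N**2 * math.log2(N)   # HS: FFT of M^2 images of N^2 pixels
proposed_min = 2 * M**2 * N**2               # proposed, with 1-pixel sub-regions
print(hs_ops / proposed_min)                 # ratio grows with N as (beta/2)*log2(N)
```

Even at this modest *N*, the minimum count of the proposed method is roughly an order of magnitude below the FFT count, and the gap widens as *N* grows.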

Next, the calculation times are compared by simulation in MATLAB R2016b using the same elemental image array. The running time of the HS method can be taken as the FFT operation time for 40$\times$40 elemental images, each with 25$\times$25 pixels. This running time, 83ms, is even shorter than that of the proposed method, 138ms. However, 138ms is not the shortest time of the proposed method: reducing *L* further to 5mm gives a running time of 27ms, which is shorter than that of the HS method. It is noted that *N* = 25 is not large, so the proposed method is not much faster than the HS method here. As the resolution of the elemental images or multi-view images increases, the speed advantage of the proposed method becomes apparent.

Numerical simulation was also performed using the method in [19], in which each elemental image is converted into an elemental hologram by FFT. The same elemental image array as in the above simulation was used. The numerical reconstruction result is shown in Fig. 8, and the resolution is very low. This is because each elemental hologram emits a plane wave to reconstruct one object point [16, 17], which is very similar to depth-priority integral imaging (DPII) [22], so the resolution is determined by the number of elemental holograms, which is only 40$\times$40 in this case. Thus, the HS-based methods need sufficiently many multi-view images (elemental images) to ensure image quality. The proposed method, in contrast, uses spherical waves to reconstruct the object, which is similar to resolution-priority integral imaging (RPII) [22], leading to a higher resolution than the HS method. In exchange, the depth range of the proposed method is smaller than that of the HS method, mirroring the respective characteristics of DPII and RPII [22].

## 4. Conclusion

A simple and fast algorithm for CGH based on pinhole-type II using a look-up table has been proposed. The algorithm is based on simulating light propagation from the pinhole array to the hologram plane. The light propagation is not simulated by diffraction calculation; instead, each pinhole is considered a point light source, so the point-based LUT method can be applied to reduce the calculation time. Numerical simulation and reconstruction were performed on both a plane object and an object with a continuous depth profile to verify the proposed method. With several CPU threads running simultaneously, real-time hologram generation can easily be achieved.

## Funding

Science and Technology Major Project of Anhui Province, China (No. 16030901001).

## References and links

**1. **D. Gabor, “A new microscopic principle,” Nature **161**(4098), 777–778 (1948). [CrossRef] [PubMed]

**2. **P. W. M. Tsang and T. C. Poon, “Review on the state-of-the-art technologies for acquisition and display of digital holograms,” IEEE Trans. Industr. Inform. **12**(3), 886–901 (2016). [CrossRef]

**3. **T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Review of fast calculation techniques for computer-generated holograms with the point-light-source-based model,” IEEE Trans. Industr. Inform. **13**(5), 2447–2454 (2017). [CrossRef]

**4. **H. Zhang, L. Cao, and G. Jin, “Computer-generated hologram with occlusion effect using layer-based processing,” Appl. Opt. **56**(13), F138–F143 (2017). [CrossRef] [PubMed]

**5. **T. Shimobaba, T. Kakue, and T. Ito, “Review of fast algorithms and hardware implementations on computer holography,” IEEE Trans. Industr. Inform. **12**(4), 1611–1622 (2016). [CrossRef]

**6. **M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging **2**(1), 28–34 (1993). [CrossRef]

**7. **S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. **47**(19), D55–D62 (2008). [CrossRef] [PubMed]

**8. **T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. **34**(20), 3133–3135 (2009). [CrossRef] [PubMed]

**9. **T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express **18**(19), 19504–19509 (2010). [CrossRef] [PubMed]

**10. **T. Mishina, M. Okui, and F. Okano, “Calculation of holograms from elemental images captured by integral photography,” Appl. Opt. **45**(17), 4026–4036 (2006). [CrossRef] [PubMed]

**11. **N. T. Shaked, J. Rosen, and A. Stern, “Integral holography: white-light single-shot hologram acquisition,” Opt. Express **15**(9), 5754–5760 (2007). [CrossRef] [PubMed]

**12. **J.-H. Park, M.-S. Kim, G. Baasantseren, and N. Kim, “Fresnel and Fourier hologram generation using orthographic projection images,” Opt. Express **17**(8), 6320–6334 (2009). [CrossRef] [PubMed]

**13. **J. Yeom, J. Hong, J.-H. Jung, K. Hong, J.-H. Park, and B. Lee, “Phase-only hologram generation based on integral imaging and its enhancement in depth resolution,” Chin. Opt. Lett. **9**(12), 120009 (2011). [CrossRef]

**14. **S.-K. Lee, S.-I. Hong, Y.-S. Kim, H.-G. Lim, N.-Y. Jo, and J.-H. Park, “Hologram synthesis of three-dimensional real objects using portable integral imaging camera,” Opt. Express **21**(20), 23662–23670 (2013). [CrossRef] [PubMed]

**15. **K. Wakunami, M. Yamaguchi, and B. Javidi, “High-resolution three-dimensional holographic display using dense ray sampling from integral imaging,” Opt. Lett. **37**(24), 5103–5105 (2012). [CrossRef] [PubMed]

**16. **H. Kang, T. Yamaguchi, and H. Yoshikawa, “Accurate phase-added stereogram to improve the coherent stereogram,” Appl. Opt. **47**(19), D44–D54 (2008). [CrossRef] [PubMed]

**17. **H. Kang, E. Stoykova, and H. Yoshikawa, “Fast phase-added stereogram algorithm for generation of photorealistic 3D content,” Appl. Opt. **55**(3), A135–A143 (2016). [CrossRef] [PubMed]

**18. **M. Yamaguchi, H. Hoshino, T. Honda, and N. Ohyama, “Phase-added stereogram: calculation of hologram using computer graphics technique,” Proc. SPIE **1914**, 25–32 (1993). [CrossRef]

**19. **Y. Ichihashi, R. Oi, T. Senoh, K. Yamamoto, and T. Kurita, “Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms,” Opt. Express **20**(19), 21645–21655 (2012). [CrossRef] [PubMed]

**20. **M. Yamaguchi, “Light-field and holographic three-dimensional displays [Invited],” J. Opt. Soc. Am. A **33**(12), 2348–2364 (2016). [CrossRef] [PubMed]

**21. **J. H. Jung, S. G. Park, Y. Kim, and B. Lee, “Integral imaging using a color filter pinhole array on a display panel,” Opt. Express **20**(17), 18744–18756 (2012). [CrossRef] [PubMed]

**22. **H. Navarro, R. Martínez-Cuenca, A. Molina-Martín, M. Martínez-Corral, G. Saavedra, and B. Javidi, “Method to remedy image degradations due to facet braiding in 3D integral-imaging monitors,” J. Disp. Technol. **6**(10), 404–411 (2010). [CrossRef]