## Abstract

We propose a fast method for generating digital Fresnel holograms based on an interpolated wavefront-recording plane (IWRP) approach. Our method can be divided into two stages. First, a small, virtual IWRP is derived in a computation-free manner. Second, the IWRP is expanded into a Fresnel hologram with a pair of fast Fourier transform processes, which are realized on the graphics processing unit (GPU). We demonstrate state-of-the-art experimental results, generating a 2048x2048 Fresnel hologram of around $4\times {10}^{6}$ object points at a rate of over 40 frames per second.

©2011 Optical Society of America

## 1. Introduction

Past research has demonstrated that the Fresnel hologram of a three-dimensional scene can be generated numerically by computing the fringe patterns emanating from each object point to the hologram plane. In brief, given a scene of self-illuminating object points $O=\left[{o}_{0}\left({x}_{0},{y}_{0},{z}_{0}\right),{o}_{1}\left({x}_{1},{y}_{1},{z}_{1}\right),\dots ,{o}_{N-1}\left({x}_{N-1},{y}_{N-1},{z}_{N-1}\right)\right]$, the diffraction pattern $D\left(x,y\right)$ on the hologram plane can be derived as

$$D\left(x,y\right)=\sum _{j=0}^{N-1}\frac{{a}_{j}}{{r}_{j}}\mathrm{exp}\left(ik{r}_{j}\right), \qquad (1)$$

where ${a}_{j}$ is the intensity of the $j$th point in $O$, ${r}_{j}=\sqrt{{\left(x-{x}_{j}\right)}^{2}+{\left(y-{y}_{j}\right)}^{2}+{z}_{j}^{2}}$ is its distance to the position $\left(x,y\right)$ on the diffraction plane, $k=2\pi /\lambda $ is the wavenumber, and $\lambda $ is the wavelength of the light. Although the method is effective, the computation involved in generating a hologram is extremely high. In the past, numerous research attempts have been made to overcome this problem, such as the works developed in [2-12]. Recently, a fast method was reported by Shimobaba et al. in [13]. In their approach, Eq. (1) is first applied to compute the fringe pattern of each object point within a small window on a virtual wavefront-recording plane (WRP) that is placed very close to the scene. Subsequently, the hologram is generated from the WRP with Fresnel diffraction. However, as the number of object points increases, the time taken to derive the WRP lengthens linearly, and real-time generation of a holographic video sequence is not possible. In this paper, a method to overcome the limitation in [13] is proposed. Essentially, we have formulated a novel, computation-free algorithm for generating what we call an interpolated WRP (IWRP). We then expand the IWRP into a Fresnel hologram. Experimental evaluation demonstrates that our proposed method is capable of generating a 2048x2048 hologram for an object scene with around $4\times {10}^{6}$ object points in less than 25 ms.
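As a point of reference for the speed-ups discussed in the rest of the paper, the direct evaluation of Eq. (1) can be sketched in a few lines of NumPy. This is a minimal illustration only; the function name, grid conventions, and parameter values are our own, not part of any implementation described in the paper.

```python
import numpy as np

def direct_hologram(points, X=128, Y=128, p=9e-6, wl=650e-9):
    """Brute-force Eq. (1): superpose the spherical wavefront of every
    object point (x_j, y_j, z_j, a_j) on an X-by-Y hologram grid."""
    ys, xs = np.mgrid[0:Y, 0:X] * p           # physical pixel coordinates
    k = 2 * np.pi / wl                        # wavenumber
    D = np.zeros((Y, X), dtype=np.complex128)
    for (xj, yj, zj, aj) in points:
        r = np.sqrt((xs - xj) ** 2 + (ys - yj) ** 2 + zj ** 2)
        D += aj / r * np.exp(1j * k * r)      # one zone plate per point
    return D

# a single on-axis point 0.4 m from the hologram plane
D = direct_hologram([(64 * 9e-6, 64 * 9e-6, 0.4, 1.0)])
```

The cost grows as $O(N\cdot XY)$, one full-plane zone plate per object point, which is precisely the bottleneck the WRP and IWRP methods are designed to remove.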

## 2. Background of the wavefront-recording plane (WRP) method

For clarity of explanation, a brief outline of the method in [13] is summarized in this section. To begin with, the following terminology is adopted. The hologram $u\left(x,y\right)$ is a vertical, 2D image positioned at the origin. A virtual WRP, ${u}_{w}\left(x,y\right)$, is placed at a depth ${z}_{w}$ from $u\left(x,y\right)$. The object scene is composed of a set of self-illuminating pixels, each having an intensity value of ${A}_{j}$ and located at a perpendicular distance of ${d}_{j}$ from the WRP. Without loss of generality, we assume that the hologram, the WRP, and the object scene have the same horizontal and vertical extents of $X$ and $Y$ pixels, respectively, as well as an identical sampling pitch $p$. The hologram generation process can be divided into two stages. In the first stage, the complex wavefront contributed by the object points is computed as

$${u}_{w}\left(x,y\right)=\sum _{j=0}^{N-1}\frac{{A}_{j}}{{R}_{wj}\left(x,y\right)}\mathrm{exp}\left(i\frac{2\pi }{\lambda }{R}_{wj}\left(x,y\right)\right), \qquad (2)$$

where ${A}_{j}$ is the intensity of the $j$th object point, and ${R}_{wj}\left(x,y\right)=\sqrt{{\left(x-{x}_{j}\right)}^{2}+{\left(y-{y}_{j}\right)}^{2}+{d}_{j}^{2}}$ is the distance of the point from the position $\left(x,y\right)$ on the WRP. As the object scene is very close to the WRP, the diffracted beam of each object point is assumed to cover only a small square window of size $W\times W$ (hereafter referred to as the virtual window). As such, Eq. (2) can be rewritten as

$${u}_{w}\left(x,y\right)=\sum _{j=0}^{N-1}{f}_{j}\left(x,y\right), \qquad (3)$$

where ${f}_{j}\left(x,y\right)=\{\begin{array}{cc}\frac{{A}_{j}}{{R}_{wj}\left(x,y\right)}\mathrm{exp}\left(i\frac{2\pi }{\lambda }{R}_{wj}\left(x,y\right)\right)& \text{if }\left|x-{x}_{j}\right|\text{ and }\left|y-{y}_{j}\right|<{\scriptscriptstyle \frac{1}{2}}W,\\ 0& \text{otherwise.}\end{array}$

In Eq. (3), the computation of the WRP for each object point is confined to the region of the virtual window on the WRP. As $W$ is much smaller than $X$ and $Y$, the computation load is significantly reduced compared with Eq. (2). In [13], the calculation is further simplified by pre-computing the exponential terms for all combinations of $\left({x}_{j},{y}_{j},{d}_{j}\right)$; the estimated computational amount is $2\alpha N{\overline{L}}^{2}$, where $\overline{L}$ is the mean perpendicular distance of the object points to the WRP, and $\alpha $ is the number of arithmetic operations involved in computing the wavefront contributed by each object point. In the second stage, the WRP is expanded to the hologram as

$$u\left(x,y\right)={\mathcal{F}}^{-1}\left[\mathcal{F}\left[{u}_{w}\left(x,y\right)\right]\mathcal{F}\left[h\left(x,y\right)\right]\right], \qquad (4)$$

where $h\left(x,y\right)=\mathrm{exp}\left(i\frac{\pi }{\lambda {z}_{w}}\left({x}^{2}+{y}^{2}\right)\right)$ is the impulse response of Fresnel propagation over the distance ${z}_{w}$, and $\mathcal{F}$ denotes the Fourier transform. As $\mathcal{F}\left[h\left(x,y\right)\right]$ can be pre-computed, the expansion requires only a pair of fast Fourier transforms.
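The second-stage expansion of the WRP is a frequency-domain convolution with a Fresnel chirp, which is where the FFT pair enters. A minimal NumPy sketch of this expansion, under our own naming and sampling conventions (not the authors' code), is:

```python
import numpy as np

def fresnel_expand(u_w, z_w, p=9e-6, wl=650e-9):
    """Propagate the WRP u_w over distance z_w by circular convolution
    with the Fresnel impulse response, realized with FFTs."""
    Y, X = u_w.shape
    ys, xs = np.mgrid[-Y // 2:Y // 2, -X // 2:X // 2]
    xs, ys = xs * p, ys * p
    # Fresnel chirp (impulse response) sampled on the hologram grid
    h = np.exp(1j * np.pi / (wl * z_w) * (xs ** 2 + ys ** 2))
    H = np.fft.fft2(np.fft.ifftshift(h))   # pre-computable transfer function
    return np.fft.ifft2(np.fft.fft2(u_w) * H)
```

Since `H` depends only on the fixed distance `z_w`, it can be computed once and reused, leaving one forward and one inverse FFT per hologram frame.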

## 3. Proposed computation-free interpolated wavefront-recording plane (IWRP) method

Our proposed method is described as follows. First, we note that the resolution of the scene image is generally smaller than that of the hologram. Hence, it is unnecessary to convert every object point of the scene to its wavefront on the WRP. On this basis, we propose to sub-sample the scene image evenly by $M$ times (where $M$ is a positive integer) along the horizontal and the vertical directions. Let $I\left(m,n\right)$ and $d\left(m,n\right)$ represent the intensity and the distance from the WRP, respectively, of the sample object point located at the $n$th row and the $m$th column of the object scene. With the sample pitch set to $Mp$, the physical horizontal and vertical positions of the sample point are ${x}_{m}=mMp+Mp/2$ and ${y}_{n}=nMp+Mp/2$, respectively, where $m$ and $n$ are integers. A square support, as shown in Fig. 1a, is defined for each sample point, with the left side ${l}_{m}$ and the right side ${r}_{m}$ given by ${l}_{m}={x}_{m}-Mp/2$ and ${r}_{m}={x}_{m}+\left(M-1\right)p/2$. Similarly, the bottom side ${b}_{n}$ and the top side ${t}_{n}$ of the square support are given by ${b}_{n}={y}_{n}-Mp/2$ and ${t}_{n}={y}_{n}+\left(M-1\right)p/2$. We point out that the square supports of adjacent sample points are non-overlapping and just touch each other at their boundaries. Next, we assume that each sample point contributes to a square virtual window on the WRP with side length equal to $Mp$, as shown in Fig. 1b.
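The sample positions and support boundaries above are simple affine functions of the sample indices. The small helper below (our own illustration, with hypothetical names) makes the geometry concrete:

```python
def support(m, n, M=8, p=9e-6):
    """Centre (x_m, y_n) and sides (l_m, r_m, b_n, t_n) of the square
    support of sample (m, n), with sample pitch M*p as in the text."""
    x_m = m * M * p + M * p / 2
    y_n = n * M * p + M * p / 2
    l_m, r_m = x_m - M * p / 2, x_m + (M - 1) * p / 2
    b_n, t_n = y_n - M * p / 2, y_n + (M - 1) * p / 2
    return (x_m, y_n), (l_m, r_m, b_n, t_n)
```

For example, with $M=8$, consecutive sample centres are spaced $8p$ apart and the supports of horizontally adjacent samples do not overlap.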

The virtual window is aligned with the square support of the object point, and the wavefront within the virtual window is contributed only by the object point in the square support. Under this approximation, Eq. (2) can therefore be re-written, within each virtual window, as

$${u}_{w}\left(x,y\right)|{}_{{l}_{m}\le x<{r}_{m},{b}_{n}\le y<{t}_{n}}=\frac{I\left(m,n\right)}{{R}_{mn}\left(x,y\right)}\mathrm{exp}\left(i\frac{2\pi }{\lambda }{R}_{mn}\left(x,y\right)\right), \qquad (5)$$

where ${R}_{mn}\left(x,y\right)=\sqrt{{\left(x-{x}_{m}\right)}^{2}+{\left(y-{y}_{n}\right)}^{2}+d{\left(m,n\right)}^{2}}$, or equivalently as

$${u}_{w}\left(x,y\right)|{}_{{l}_{m}\le x<{r}_{m},{b}_{n}\le y<{t}_{n}}=G\left(x-{x}_{m},y-{y}_{n},I\left(m,n\right),d\left(m,n\right)\right). \qquad (6)$$

It can be inferred from Eq. (6) that the function $G\left(x-{x}_{m},y-{y}_{n},I\left(m,n\right),d\left(m,n\right)\right)$ represents a Fresnel zone plate $G\left(x,y,I\left(m,n\right),d\left(m,n\right)\right)$ within the virtual window, shifted to the position $\left({x}_{m},{y}_{n}\right)$. The Fresnel zone plate is contributed by an object point of intensity $I\left(m,n\right)$ at distance $d\left(m,n\right)$ from the wavefront plane. Hence, for finite variations of $d\left(m,n\right)$ and $I\left(m,n\right)$, all possible combinations of $G\left(x,y,I\left(m,n\right),d\left(m,n\right)\right)$ can be pre-computed in advance and stored in a look-up table (LUT). For example, if $d\left(m,n\right)$ and $I\left(m,n\right)$ are quantized into ${N}_{d}$ and ${N}_{I}$ levels, respectively, there will be a total of ${N}_{d}\times {N}_{I}$ combinations. As a result, in the generation of ${u}_{w}\left(x,y\right)$, each of its constituent virtual windows can be retrieved from the corresponding entry in the LUT. In other words, the process is computation-free.
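To make the LUT mechanism concrete, the sketch below pre-computes an ${N}_{d}\times {N}_{I}$ table of $M\times M$ zone-plate patches and then assembles the WRP purely by indexing. It is our own minimal, unoptimized CPU-side illustration with hypothetical names, not the authors' GPU implementation:

```python
import numpy as np

def build_lut(depths, intensities, M=8, p=9e-6, wl=650e-9):
    """Pre-compute G(x, y, I, d) for every (depth, intensity) pair:
    an M x M Fresnel zone-plate patch centred on the virtual window."""
    ys, xs = np.mgrid[0:M, 0:M]
    xs = (xs - (M - 1) / 2) * p
    ys = (ys - (M - 1) / 2) * p
    k = 2 * np.pi / wl
    lut = np.empty((len(depths), len(intensities), M, M), dtype=np.complex64)
    for a, d in enumerate(depths):
        r = np.sqrt(xs ** 2 + ys ** 2 + d ** 2)
        patch = np.exp(1j * k * r) / r
        for b, I in enumerate(intensities):
            lut[a, b] = I * patch
    return lut

def assemble_wrp(depth_idx, inten_idx, lut):
    """Paste the LUT patch of each sample point into its virtual
    window: table lookup and memory copies only, no wavefront maths."""
    rows, cols = depth_idx.shape
    M = lut.shape[-1]
    u_w = np.zeros((rows * M, cols * M), dtype=np.complex64)
    for n in range(rows):
        for m in range(cols):
            u_w[n * M:(n + 1) * M,
                m * M:(m + 1) * M] = lut[depth_idx[n, m], inten_idx[n, m]]
    return u_w
```

Once the table exists, generating ${u}_{w}\left(x,y\right)$ costs only $(X/M)\times(Y/M)$ patch copies, which is why the run-time stage is described as computation-free.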

Although the decimation effectively reduces the computation time, as will be shown later, the reconstructed images obtained with the WRP derived from Eq. (6) are weak, noisy, and difficult to observe. This is caused by the sparse distribution of the object points resulting from the sub-sampling of the scene image. To overcome this problem, we propose the interpolated WRP (IWRP), which interpolates the associated support of each object point with padding, i.e., the object point is duplicated to all the pixels within its square support. After the interpolation, the wavefront of a virtual window is contributed by all the object points (which are identical in intensity and depth) within the support, as given by

$${u}_{w}\left(x,y\right)|{}_{{l}_{m}\le x<{r}_{m},{b}_{n}\le y<{t}_{n}}=\sum _{{\tau }_{x}=0}^{M-1}\sum _{{\tau }_{y}=0}^{M-1}\frac{I\left(m,n\right)}{{R}_{mn}^{{\tau }_{x}{\tau }_{y}}\left(x,y\right)}\mathrm{exp}\left(i\frac{2\pi }{\lambda }{R}_{mn}^{{\tau }_{x}{\tau }_{y}}\left(x,y\right)\right) \qquad (7)$$

$$={G}_{A}\left(x-{x}_{m},y-{y}_{n},I\left(m,n\right),d\left(m,n\right)\right), \qquad (8)$$

where ${R}_{mn}^{{\tau }_{x}{\tau }_{y}}\left(x,y\right)=\sqrt{{\left(x-{l}_{m}-{\tau }_{x}p\right)}^{2}+{\left(y-{b}_{n}-{\tau }_{y}p\right)}^{2}+d{\left(m,n\right)}^{2}}$ is the distance between the position $\left(x,y\right)$ in the virtual window and the duplicated object point at $\left({l}_{m}+{\tau }_{x}p,{b}_{n}+{\tau }_{y}p\right)$ in the square support.

Similar to Eq. (6), the wavefront function ${G}_{A}\left(x-{x}_{m},y-{y}_{n},I\left(m,n\right),d\left(m,n\right)\right)$ in Eq. (8) is simply a shifted version of ${G}_{A}\left(x,y,I\left(m,n\right),d\left(m,n\right)\right)$, which can be pre-computed and stored in a LUT for the different combinations of $I\left(m,n\right)$ and $d\left(m,n\right)$. Consequently, each virtual window in the IWRP can be generated in a computation-free manner by retrieving, from the LUT, the wavefront corresponding to the intensity and depth of the corresponding object point. Comparing Eq. (6) and Eq. (8), it can also be inferred that the numbers of combinations of the values of $G\left(x,y,I\left(m,n\right),d\left(m,n\right)\right)$ and ${G}_{A}\left(x,y,I\left(m,n\right),d\left(m,n\right)\right)$, and hence the sizes of the corresponding LUTs, are identical. After ${u}_{w}\left(x,y\right)|{}_{{l}_{m}\le x<{r}_{m},{b}_{n}\le y<{t}_{n}}$ is generated within the IWRP, Eq. (4) is applied to generate the hologram $u\left(x,y\right)$.
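The only difference between the $G$ and ${G}_{A}$ tables is how each patch is pre-computed: for ${G}_{A}$, the patch is the summed wavefront of the $M\times M$ duplicated points in the support. A sketch of that pre-computation step follows (our own naming and an illustrative brute-force double loop; in practice each resulting patch would populate one LUT entry):

```python
import numpy as np

def interpolated_patch(I, d, M=8, p=9e-6, wl=650e-9):
    """G_A for one sample: the M x M virtual window receives the summed
    wavefront of the M x M duplicated object points in the support."""
    k = 2 * np.pi / wl
    # pixel offsets relative to the sample centre, shared by the window
    # pixels and the duplicated source pixels
    coords = (np.arange(M) - (M - 1) / 2) * p
    wy, wx = np.meshgrid(coords, coords, indexing="ij")
    patch = np.zeros((M, M), dtype=np.complex128)
    for sy in coords:                 # loop over duplicated source pixels
        for sx in coords:
            r = np.sqrt((wx - sx) ** 2 + (wy - sy) ** 2 + d ** 2)
            patch += I / r * np.exp(1j * k * r)
    return patch
```

Because the duplicated points share a single intensity and depth, the table still has only ${N}_{d}\times {N}_{I}$ entries, exactly as stated above.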

## 4. Experimental results

Our proposed method is evaluated with the test image in Fig. 2a. The horizontal and vertical extents of the hologram, the IWRP, and the test image are identical, comprising 2048 by 2048 square pixels, each with a size of 9 by 9 µm and quantized with 8 bits. The test image is divided into a left part and a right part located at distances of ${z}_{1}=0.005\,\mathrm{m}$ and ${z}_{2}=0.01\,\mathrm{m}$ from the IWRP. Every pixel in the image is taken to generate the hologram, constituting a total of around $4\times {10}^{6}$ object points. $\lambda $ and the distance ${z}_{w}$ between the WRP/IWRP and the hologram are set to 650 nm and 0.4 m, respectively. ${N}_{d}$ and ${N}_{I}$ are both set to 256, and $M=8$, resulting in a LUT of around 4.1 MB. We decimated the source image by 8 times in the horizontal and vertical directions (i.e., $M=8$ and a virtual window of size 8x8), and applied Eqs. (5) and (6) to derive a WRP. The latter is then expanded into a hologram with Eq. (4). A real, off-axis hologram $H\left(x,y\right)$ is generated by adding a planar reference wave $R\left(y\right)$ (illuminating at an inclined angle of ${1.2}^{\circ }$ on the hologram) to $u\left(x,y\right)$ and taking the real part of the sum.

The hologram is displayed on a liquid-crystal-on-silicon (LCOS) device modified from the Sony VPL-HW15 Bravia projector. The projector has a horizontal and a vertical resolution of 1920 and 1080, respectively. Due to the limited size and resolution of the LCOS, only part of the hologram (and hence of the reconstructed image) can be displayed. The reconstructed images corresponding to the upper half and the lower half of the hologram are shown in Figs. 2b and 2c, respectively. We observe that the images are extremely weak and noisy. Next, we repeat the above process, generating the IWRP with Eqs. (7) and (8). The reconstructed images are shown in Figs. 2d and 2e. Evidently, the reconstructed image is much clearer in appearance. To further illustrate our proposed method, we have generated a sequence of holograms of a rotating globe rendered with the texture of an earth image. The radius of the globe is around 0.005 m, and the front tip of the globe is located at 0.01 m from the IWRP. The latter is at a distance of 0.3 m from the hologram. A single-frame excerpt of the optically reconstructed animation clip (Media 1) is shown in Fig. 2f. It can be seen from the excerpt, as well as from the animation clip, that despite the complexity of the texture, the earth image on the globe is clearly reconstructed in every view. Next, we evaluate the computational efficiency of our proposed method. The generation of the IWRP and its subsequent expansion into a Fresnel hologram are conducted on a PC (Intel i7-950 @ 3.06 GHz) and a GPU (Nvidia GeForce GTX 580), respectively. The total hologram generation time and the equivalent frame rate (measured in fps, the number of hologram frames generated per second), versus the number of object points, are shown in Table 1. We have assumed that the numbers of object points and hologram pixels are identical. From the results, it can be seen that the hologram generation time is very short, as it only involves table lookup and data transfer between memory arrays. For a hologram (as well as an image) of size 2048x2048 pixels, our proposed method is capable of attaining a generation speed of over 40 frames per second.

## 5. Conclusion

In this paper, we have proposed a method for the real-time generation of Fresnel holograms. An interpolated wavefront-recording plane (IWRP) is first constructed with a computation-free process. Subsequently, the IWRP is expanded into a Fresnel hologram via a pair of fast Fourier transform operations realized on the GPU. Based on our method, a hologram of size 2048x2048 pixels, representing an image scene comprising over $4\times {10}^{6}$ points, can be generated in less than 25 ms, equivalent to over 40 frames per second. These results correspond to state-of-the-art speed in the calculation of CGH.

## Acknowledgments

This work is partly supported by the Chinese Academy of Sciences Visiting Professorships for Senior International Scientists (Grant No. 2010T2G17).

## References and links

**1. **T.-C. Poon, ed., *Digital Holography and Three-Dimensional Display: Principles and Applications* (Springer, 2006).

**2. **S. C. Kim and E. S. Kim, “Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods,” Appl. Opt. **48**(6), 1030–1041 (2009). [CrossRef]

**3. **S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. **47**(19), D55–D62 (2008). [CrossRef] [PubMed]

**4. **S.-C. Kim, J.-H. Yoon, and E.-S. Kim, “Fast generation of three-dimensional video holograms by combined use of data compression and lookup table techniques,” Appl. Opt. **47**(32), 5986–5995 (2008). [CrossRef] [PubMed]

**5. **H. Sakata and Y. Sakamoto, “Fast computation method for a Fresnel hologram using three-dimensional affine transformations in real space,” Appl. Opt. **48**(34), H212–H221 (2009). [CrossRef] [PubMed]

**6. **T. Yamaguchi, G. Okabe, and H. Yoshikawa, “Real-time image plane full-color and full-parallax holographic video display system,” Opt. Eng. **46**(12), 125801 (2007). [CrossRef]

**7. **H. Yoshikawa, “Fast computation of Fresnel holograms employing difference,” Opt. Rev. **8**(5), 331–335 (2001). [CrossRef]

**8. **T. Ito, N. Masuda, K. Yoshimura, A. Shiraki, T. Shimobaba, and T. Sugie, “Special-purpose computer HORN-5 for a real-time electroholography,” Opt. Express **13**(6), 1923–1932 (2005). [CrossRef] [PubMed]

**9. **L. Ahrenberg, P. Benzie, M. Magnor, and J. Watson, “Computer generated holography using parallel commodity graphics hardware,” Opt. Express **14**(17), 7636–7641 (2006). [CrossRef] [PubMed]

**10. **H. Kang, F. Yaraş, and L. Onural, “Graphics processing unit accelerated computation of digital holograms,” Appl. Opt. **48**(34), H137–H143 (2009). [CrossRef] [PubMed]

**11. **Y. Seo, H. Cho, and D. Kim, “High-performance CGH processor for real-time digital holography,” in *Laser Applications to Chemical, Security and Environmental Analysis*, OSA Technical Digest (CD) (Optical Society of America, 2008), paper JMA9.

**12. **P. W. M. Tsang, J.-P. Liu, W. K. Cheung, and T.-C. Poon, “Fast generation of Fresnel holograms based on multirate filtering,” Appl. Opt. **48**(34), H23–H30 (2009). [CrossRef] [PubMed]

**13. **T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express **18**(19), 19504–19509 (2010). [CrossRef] [PubMed]