## Abstract

Double-step Fresnel diffraction (DSF) is an efficient diffraction calculation in terms of memory usage and calculation time. This paper describes band-limited DSF, which mitigates the aliasing noise of the DSF and will be useful for large computer-generated holograms (CGHs) and gigapixel digital holography. As an application, we demonstrate CGH generation with nearly 8K × 4K pixels from texture and depth maps of a three-dimensional scene captured by a depth camera.

© 2013 OSA

## 1. Introduction

In state-of-the-art electroholography [1] and digital holography [2], we need to treat a large number of pixels to increase the quality of reconstructed images. Electroholography is a promising technique for three-dimensional (3D) displays because it is capable of reconstructing the wavefront of a 3D scene [3, 4]. Practical electroholography requires a high-resolution spatial light modulator (SLM) to display a computer-generated hologram (CGH), because the size of the reconstructed 3D scene is proportional to the size of the CGH and the viewing angle is inversely proportional to the pixel pitch of the CGH. For example, Ref. [3] shows the excellent image quality of 3D scenes reconstructed from sub-gigapixel CGHs. Thus, we need to calculate a large CGH by calculating the diffraction from a 3D scene. Unfortunately, the computation required to generate such a CGH takes a long time, preventing the realization of practical electroholography [5]. To solve this problem, methods for accelerating CGH calculation have been proposed toward real-time operation [6–8]. Refs. [6] and [7] calculate CGHs from 3D objects composed of point light sources, while Ref. [8] calculates CGHs from a 3D scene acquired by the integral imaging technique.

Digital holography is a hologram-recording technique using an electronic device such as a CCD or CMOS camera; the captured hologram is reconstructed on a computer by diffraction calculation. Owing to its holographic properties, this technique is applied to 3D imaging, 3D microscopy (digital holographic microscopy), and so forth. In order to widen the field-of-view and increase the lateral and depth resolution of the reconstructed image, we need to capture a large hologram, such as the gigapixel holograms achieved in recent research [9–11]. The diffraction calculation from such a gigapixel hologram also takes a long time and a large amount of memory.

As mentioned above, electroholography and digital holography need efficient diffraction calculation to shorten the calculation time and reduce memory usage. Double-step Fresnel diffraction (DSF) [12] is such an efficient calculation method in terms of memory and calculation time.

This paper describes band-limited DSF (BL-DSF) to mitigate the aliasing noise of the original DSF. Then, in order to show the effectiveness, we demonstrate an efficient approach for a large CGH, whose size is nearly 8K × 4K pixels, from texture and depth maps of a 3D scene captured by a depth camera, using BL-DSF. The merits of BL-DSF are the small amount of memory and short calculation time, compared with convolution-based diffraction calculations such as the angular spectrum method (ASM) [13].

In Section 2, we explain BL-DSF. In Section 3, we present the results of the large CGH generation. Section 4 concludes this work.

## 2. Band-limited double-step Fresnel diffraction

In Fourier optics, diffraction calculations are categorized into two forms: the first is the convolution-based diffraction and the second is Fourier transform-based diffraction. The general expression of convolution-based diffraction is as follows:

$$u_2(x_2, y_2) = \mathcal{F}^{-1}\left[\, \mathcal{F}\left[u_1(x_1, y_1)\right] P_z(f_x, f_y) \,\right], \qquad (1)$$

where $\mathcal{F}[\cdot]$ and $\mathcal{F}^{-1}[\cdot]$ are the Fourier and inverse Fourier transforms, respectively, $u_1(x_1, y_1)$ and $u_2(x_2, y_2)$ indicate the source and destination planes, $p_z$ is the point spread function, and $P_z(f_x, f_y) = \mathcal{F}[p_z(x_1, y_1)]$ is the transfer function according to the propagation distance $z$. For example, ASM [13] uses $P_z\left(f_x, f_y\right) = \exp\left(-2\pi i z \sqrt{1/\lambda^2 - f_x^2 - f_y^2}\right)$. A merit of convolution-based diffraction is that the sampling rate on the destination plane is the same as that on the source plane; a demerit is the need to expand the source and destination planes by zero-padding to avoid the aliasing caused by the circular-convolution property of Eq. (1). This expansion costs a large amount of memory and a long calculation time.
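For illustration, the convolution-based propagation of Eq. (1) with the ASM transfer function can be sketched in NumPy as follows. The function name, the 2× padding factor, and the handling of evanescent frequencies are our own illustrative choices, not taken from [13]:

```python
import numpy as np

def asm_propagate(u1, wavelength, z, pitch):
    """Angular spectrum method (Eq. (1)) with 2x zero-padding against circular convolution."""
    N = u1.shape[0]                        # assume a square N x N source field
    M = 2 * N                              # padded size
    u = np.zeros((M, M), dtype=complex)
    u[:N, :N] = u1                         # zero-pad the source plane
    fx = np.fft.fftfreq(M, d=pitch)        # spatial frequencies along x
    fy = fx[:, np.newaxis]                 # and along y
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    H = np.exp(-2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))  # transfer function P_z
    H[arg < 0] = 0.0                       # drop evanescent components
    u2 = np.fft.ifft2(np.fft.fft2(u) * H)
    return u2[:N, :N]                      # crop back to the original window
```

Note that the padded field is four times the size of the source field, which is exactly the memory overhead discussed below.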

Meanwhile, single-step Fresnel diffraction (SSF) is a Fourier transform-based diffraction [13]. SSF is expressed as follows:

$$u_2(x_2, y_2) = \frac{\exp(2\pi i z / \lambda)}{i \lambda z} \exp\left(\frac{i\pi \left(x_2^2 + y_2^2\right)}{\lambda z}\right) \mathcal{F}\left[\, u_1(x_1, y_1) \exp\left(\frac{i\pi \left(x_1^2 + y_1^2\right)}{\lambda z}\right) \right], \qquad (2)$$

where $(x_1, y_1) = \left((m_1 - N_x/2)\, p_{x1},\ (n_1 - N_y/2)\, p_{y1}\right)$ and $(x_2, y_2) = \left((m_2 - N_x/2)\, p_{x2},\ (n_2 - N_y/2)\, p_{y2}\right)$, with $m_1, m_2 \in [0, N_x - 1]$ and $n_1, n_2 \in [0, N_y - 1]$. The sampling rates on the source plane are $p_{x1}$ and $p_{y1}$, those on the destination plane are $p_{x2} = \lambda z / (N_x p_{x1})$ and $p_{y2} = \lambda z / (N_y p_{y1})$, and the size of the source plane is $N_x \times N_y$.

SSF can calculate the light propagation at $z$ by one fast Fourier transform (FFT), so it does not need the zero-padding of convolution-based diffraction. Thus, it is an efficient approach in terms of the memory and calculation time required; however, the sampling rates on the destination plane change with the wavelength and the propagation distance.
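A minimal NumPy sketch of SSF under the centered coordinates above, assuming a square field; the function name is illustrative and the normalization follows the standard Fresnel kernel:

```python
import numpy as np

def ssf_propagate(u1, wavelength, z, p1):
    """Single-step Fresnel diffraction: one FFT, but the destination pitch
    becomes p2 = wavelength * z / (N * p1)."""
    N = u1.shape[0]                              # square N x N field assumed
    m = np.arange(N) - N // 2                    # centered sample indices
    x1 = m * p1
    y1 = x1[:, np.newaxis]
    chirp1 = np.exp(1j * np.pi * (x1**2 + y1**2) / (wavelength * z))
    spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u1 * chirp1)))
    p2 = wavelength * z / (N * p1)               # destination sampling rate
    x2 = m * p2
    y2 = x2[:, np.newaxis]
    chirp2 = np.exp(1j * np.pi * (x2**2 + y2**2) / (wavelength * z))
    u2 = chirp2 * spectrum * np.exp(2j * np.pi * z / wavelength) / (1j * wavelength * z)
    return u2, p2
```

Returning `p2` alongside the field makes the wavelength- and distance-dependent resampling explicit to the caller.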

To overcome this problem, DSF was proposed [12]. It calculates the light propagation between the source plane and the destination plane by two SSFs via a virtual plane. The first SSF calculates the light propagation between the source plane and the virtual plane at distance $z_1$; the sampling rates on the virtual plane are $p_{xv} = \lambda z_1 / (N_x p_{x1})$ and $p_{yv} = \lambda z_1 / (N_y p_{y1})$. The second SSF calculates the light propagation between the virtual plane and the destination plane at distance $z_2$; the sampling rates on the destination plane are $p_{x2} = \lambda z_2 / (N_x p_{xv}) = |z_2 / z_1|\, p_{x1}$ and $p_{y2} = \lambda z_2 / (N_y p_{yv}) = |z_2 / z_1|\, p_{y1}$. The total propagation distance is $z = z_1 + z_2$, where $z_1$ and $z_2$ may also take negative values. DSF with a rectangular function introduced for band limitation, which is referred to as BL-DSF, is expressed as follows:

$$u_2(x_2, y_2) = C \exp\left(\frac{i\pi \left(x_2^2 + y_2^2\right)}{\lambda z_2}\right) \mathrm{FFT}^{\operatorname{sgn}(z_2)}\left[\, \mathrm{Rect}(x_v, y_v) \exp\left(\frac{i\pi z \left(x_v^2 + y_v^2\right)}{\lambda z_1 z_2}\right) \mathrm{FFT}^{\operatorname{sgn}(z_1)}\left[\, u_1(x_1, y_1) \exp\left(\frac{i\pi \left(x_1^2 + y_1^2\right)}{\lambda z_1}\right) \right] \right], \qquad (4)$$

where $C$ is a constant amplitude factor and $(x_v, y_v)$ are the coordinates on the virtual plane.

Here $\mathrm{FFT}^{\operatorname{sgn}(z)}$ denotes the forward FFT when the sign of $z$ is positive and the inverse FFT when it is negative. The rectangular function $\mathrm{Rect}(x_v, y_v)$, which is 1 inside the band-limiting area and 0 outside, is introduced to band-limit the chirp function $\exp\left(\frac{i\pi z\left({x}_{v}^{2}+{y}_{v}^{2}\right)}{\lambda {z}_{1}{z}_{2}}\right)=\exp\left(2\pi i\varphi \left({x}_{v},{y}_{v}\right)\right)$, because the result of the first SSF can be regarded as the frequency domain; aliasing will occur in the absence of the rectangular function. Requiring that the local spatial frequencies of the chirp, $\partial \varphi / \partial x_v = z x_v / (\lambda z_1 z_2)$ and $\partial \varphi / \partial y_v = z y_v / (\lambda z_1 z_2)$, not exceed the Nyquist limits $1/(2 p_{xv})$ and $1/(2 p_{yv})$, we determine the band-limiting area as follows:

$$|x_v| \leq \frac{\lambda |z_1 z_2|}{2 |z| p_{xv}}, \qquad |y_v| \leq \frac{\lambda |z_1 z_2|}{2 |z| p_{yv}}. \qquad (5)$$
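Putting the two SSFs, the combined virtual-plane chirp, and the band limit together, BL-DSF can be sketched in NumPy as follows. Constant amplitude factors are dropped and the helper names are our own; this is a sketch of the scheme, not a reference implementation:

```python
import numpy as np

def _sgn_fft2(u, z):
    """Forward FFT for z > 0, inverse FFT for z < 0 (the FFT^sgn(z) operator)."""
    f = np.fft.fft2 if z > 0 else np.fft.ifft2
    return np.fft.fftshift(f(np.fft.ifftshift(u)))

def bl_dsf(u1, wavelength, z1, z2, p1):
    """Band-limited double-step Fresnel diffraction (sketch, square N x N field)."""
    N = u1.shape[0]
    z = z1 + z2                                   # total propagation distance
    m = np.arange(N) - N // 2
    x1 = m * p1; y1 = x1[:, None]
    # First SSF: source plane -> virtual plane at distance z1.
    v = _sgn_fft2(u1 * np.exp(1j * np.pi * (x1**2 + y1**2) / (wavelength * z1)), z1)
    pv = wavelength * abs(z1) / (N * p1)          # virtual-plane sampling rate
    xv = m * pv; yv = xv[:, None]
    # Combined chirp on the virtual plane, band-limited by the rectangular function.
    chirp = np.exp(1j * np.pi * z * (xv**2 + yv**2) / (wavelength * z1 * z2))
    limit = wavelength * abs(z1 * z2) / (2 * abs(z) * pv)
    rect = (np.abs(xv) <= limit) & (np.abs(yv) <= limit)
    # Second SSF: virtual plane -> destination plane at distance z2.
    u2 = _sgn_fft2(v * chirp * rect, z2)
    p2 = abs(z2 / z1) * p1                        # destination sampling rate
    x2 = m * p2; y2 = x2[:, None]
    u2 = u2 * np.exp(1j * np.pi * (x2**2 + y2**2) / (wavelength * z2))
    return u2, p2
```

The whole computation stays on a single N × N grid, which is where the memory advantage over the padded convolution-based methods comes from.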

### 2.1. Performance

BL-DSF is an efficient method in terms of the amount of memory and calculation time. For brevity, we assume the sizes of the source and destination planes are $N_x = N_y = N$ in the following discussion. Convolution-based diffraction needs to extend the source and destination planes to at least four times their original area to avoid circular convolution, so its calculation time is proportional to $4 N^2 \log_2 2N$. On the other hand, the calculation time of BL-DSF is proportional to only $N^2 \log_2 N$.

We estimate the performance of BL-DSF, compared with ASM, on a CPU (Intel Core i7-2600S) and a graphics processing unit (GPU) (NVIDIA GeForce GTX 670). Table 1 shows the calculation times; BL-DSF calculates diffraction faster than ASM. We used only one CPU thread.

The amounts of memory for ASM and BL-DSF when using a single-precision floating-point format are 32*N*^{2} bytes and 8*N*^{2} bytes, respectively. For instance, when *N* = 8,192, ASM needs 2 GBytes, while BL-DSF needs only 512 MBytes. We could not calculate the case of *N* = 8,192 using ASM on the GPU because the required memory exceeded the maximum amount of GPU memory (2 GBytes). In contrast, we can calculate the case of *N* = 8,192 using BL-DSF because it requires only 512 MBytes of GPU memory.
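These figures follow directly from the array sizes: a zero-padded 2*N* × 2*N* complex single-precision field occupies (2*N*)² × 8 = 32*N*² bytes, whereas an unpadded *N* × *N* field occupies 8*N*² bytes. A quick check:

```python
N = 8192
asm_bytes = 32 * N**2      # padded 2N x 2N complex64 field: (2N)^2 * 8 bytes
bldsf_bytes = 8 * N**2     # unpadded N x N complex64 field: N^2 * 8 bytes
print(asm_bytes / 2**30, "GiB (ASM)")       # 2.0 GiB (ASM)
print(bldsf_bytes / 2**20, "MiB (BL-DSF)")  # 512.0 MiB (BL-DSF)
```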

Figure 1 shows the real part of the complex amplitude calculated by BL-DSF from a source plane containing a single point at its center, with and without the rectangular function in Eq. (4). The calculation conditions are a wavelength of 633 nm, a sampling rate on the source plane of 10 *μ*m, a propagation distance of *z* = *z*_{1} + *z*_{2} = 0.02 m and *N* = 512. Note that the sampling rate on the destination plane is scaled by the ratio of *z*_{2} to *z*_{1}. Since we do not want to change the sampling rates between the source and destination planes, we set *z*_{1} = *z*/2 + 500 m and *z*_{2} = *z*/2 − 500 m. In this case, the sampling rate on the destination plane is almost the same as that on the source plane, about 9.9996 *μ*m. Figure 1(a) shows the case without the rectangular function; aliasing noise occurs. Meanwhile, Fig. 1(b) shows that introducing the rectangular function mitigates the aliasing noise.
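The sampling-rate claim for this choice of *z*_{1} and *z*_{2} can be verified numerically; the split keeps |*z*_{2}/*z*_{1}| within 0.004% of unity:

```python
z = 0.02                       # total propagation distance [m]
z1 = z / 2 + 500.0             # first-step distance [m]
z2 = z / 2 - 500.0             # second-step (negative) distance [m]
p1 = 10e-6                     # source-plane sampling rate [m]
p2 = abs(z2 / z1) * p1         # destination-plane sampling rate, |z2/z1| * p1
print(p2)                      # ~9.9996e-06 m: nearly the source pitch
```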

## 3. Application to computer-generated holograms

To show the effectiveness of BL-DSF, we demonstrate fast CGH calculation using BL-DSF from a 3D scene composed of texture and depth maps, captured by a depth camera or created by computer graphics, with both maps held in graphics memory.

In this experiment, Figs. 2(a) and 2(b) show the texture and depth maps of the 3D scene, with about 2K × 1K pixels, captured by an Axi-vision camera [14]. We used a color electroholography system with nearly 8K × 4K LCD panels [8] developed by the National Institute of Information and Communications Technology (NICT), Japan. This optical system consists of RGB lasers (wavelengths of 640 nm, 532 nm and 473 nm, respectively) and three 8K × 4K amplitude-modulation LCD panels with a pixel pitch of 4.8 *μ*m to display amplitude CGHs. To eliminate the 0-th-order and conjugate lights that inherently arise from amplitude CGHs, the optical system uses half-zone-plate processing and the single-sideband technique [15]. Figure 1(c) shows the half-zone-processed result, obtained by limiting the rectangular function of BL-DSF to its lower half.

In CGH generation, we first convert the texture map *tex*(*m*_{1}, *n*_{1}) and the depth map *dep*(*m*_{1}, *n*_{1}) to about 8K × 4K pixels. A pixel value in *dep*(*m*_{1}, *n*_{1}) indicates a certain depth, *i*, and the range of the pixel values is 0 to 255. Therefore, the physical distance is expressed as $z + i \Delta_z$ ($i \in [0, 255]$), where $\Delta_z$ is the physical spacing between neighboring pixel values in the depth map.

We calculate the complex amplitude on the CGH plane by superimposing, with BL-DSF, the complex amplitude corresponding to each depth as follows:

$$u_2(m_2, n_2) = \sum_{i=0}^{255} \mathrm{BLDSF}_{z + i \Delta_z}\left[\, tex(m_1, n_1) \exp\left(2\pi i\, n(m_1, n_1)\right) mask_i(m_1, n_1) \,\right], \qquad (7)$$

where $\mathrm{BLDSF}_{z}[\cdot]$ denotes BL-DSF of Eq. (4) at propagation distance $z$, $tex(m_1, n_1)$ is the texture map (Fig. 2(a)), and $n(m_1, n_1)$ is a uniform distribution of pseudo-random numbers within 0.0 to 1.0, acting as a random phase. The function $mask_i(m_1, n_1)$ is defined by

$$mask_i(m_1, n_1) = \begin{cases} 1 & (dep(m_1, n_1) = i) \\ 0 & (\mathrm{otherwise}). \end{cases} \qquad (8)$$

In order to obtain the amplitude CGH, we take the real part, *I*(*m*_{2}, *n*_{2}), of the complex amplitude *u*_{2}(*m*_{2}, *n*_{2}). In addition, we obtain the final amplitude CGH by clipping *I*(*m*_{2}, *n*_{2}) at ±2*σ* to increase the brightness of the reconstructed image, where *σ* is the standard deviation of *I*(*m*_{2}, *n*_{2}). Figure 3 shows a reconstructed 3D scene from a nearly 8K × 4K CGH generated using BL-DSF. The left-hand, middle and right-hand figures are photographs taken with different focus.
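The layered superposition of Eq. (7) and the amplitude-CGH conversion can be sketched as follows. Here `bl_dsf` is a placeholder for any BL-DSF propagator, and iterating over only the depth values actually present in the map is an optimization we assume (empty masks contribute nothing to the sum), not a step stated above:

```python
import numpy as np

def layered_cgh(tex, dep, bl_dsf, wavelength, z, dz, p1):
    """Sketch of Eq. (7): one BL-DSF propagation per depth layer, then
    real part and +/-2-sigma clipping to form the amplitude CGH."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(tex.shape))   # random phase n(m1, n1)
    u2 = np.zeros(tex.shape, dtype=complex)
    for i in np.unique(dep):                 # only depths that actually occur
        mask = (dep == i)                    # mask_i(m1, n1)
        layer = tex * phase * mask
        u2 += bl_dsf(layer, wavelength, z + int(i) * dz, p1)
    cgh = u2.real                            # amplitude CGH I(m2, n2)
    sigma = cgh.std()
    return np.clip(cgh, -2 * sigma, 2 * sigma)  # clip at +/-2 sigma for brightness
```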

Table 2 shows the calculation times of the CGH using BL-DSF and ASM on the CPU and GPU; the times are for generating a monochrome CGH only. When using ASM, the only difference is that BL-DSF in Eq. (7) is replaced by Eq. (1). We can calculate one CGH corresponding to one wavelength in 16.6 seconds using BL-DSF on the GPU.

## 4. Conclusion

We improved the original DSF by band-limiting the frequency domain in order to mitigate aliasing noise. The band limitation can also be applied to half-zone-plate processing, a useful technique for eliminating the 0-th-order and conjugate lights in electroholography using an amplitude CGH. The amount of memory needed for BL-DSF is a quarter of that for convolution-based diffraction, which is useful for memory-limited devices such as GPUs. The calculation time for BL-DSF is also shorter than that of convolution-based diffraction. We showed the fast generation of an 8K × 4K CGH using BL-DSF, and BL-DSF will also be useful for gigapixel digital holography.

## Acknowledgments

This work is supported by Japan Society for the Promotion of Science (JSPS) KAKENHI (Young Scientists (B) 23700103) 2011, and the NAKAJIMA FOUNDATION.

## References and links

**1. **S. A. Benton and V. M. Bove Jr., *Holographic Imaging* (Wiley-Interscience, 2008) [CrossRef]

**2. **U. Schnars and W. Juptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. **33**, 179–181 (1994) [CrossRef]

**3. **C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer **38**, 46–53 (2005) [CrossRef]

**4. **F. Yaras, H. Kang, and L. Onural, “Circular holographic video display system,” Opt. Express **19**, 9147–9156 (2011) [CrossRef]

**5. **M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging , **2**, 28–34 (1993) [CrossRef]

**6. **H. Yoshikawa, T. Yamaguchi, and R. Kitayama, “Real-time generation of full color image hologram with compact distance look-up table,” OSA Topical Meeting on Digital Holography and Three-Dimensional Imaging 2009, DWC4 (2009).

**7. **T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. **34**, 3133–3135 (2009) [CrossRef]

**8. **Y. Ichihashi, R. Oi, T. Senoh, K. Yamamoto, and T. Kurita, “Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms,” Opt. Express **20**, 21645–21655 (2012) [CrossRef]

**9. **D. J. Brady and S. Lim, “Gigapixel holography,” 2011 ICO International Conference on Information Photonics (IP), 1–2, (2011) [CrossRef]

**10. **J. R. Fienup and A. E. Tippie, “Gigapixel synthetic-aperture digital holography,” Proc. SPIE **8122**, 812203 (2011) [CrossRef]

**11. **S. O. Isikman, A. Greenbaum, W. Luo, A.F. Coskun, and A. Ozcan, “Giga-pixel lensfree holographic microscopy and tomography using color image sensors,” PLoS ONE **7**, e45044 (2012) [CrossRef]

**12. **F. Zhang, I. Yamaguchi, and L. P. Yaroslavsky, “Algorithm for reconstruction of digital holograms with adjustable magnification,” Opt. Lett. **29**, 1668–1670 (2004) [CrossRef]

**13. **J. W. Goodman, *Introduction to Fourier Optics*, 3rd ed. (Roberts & Company, 2005).

**14. **M. Kawakita, K. Iizuka, T. Aida, H. Kikuchi, H. Fujikake, J. Yonai, and K. Takizawa, “Axi-vision camera (real-time distance-mapping camera),” Appl. Opt. **39**, 3931–3939 (2000) [CrossRef]

**15. **T. Mishina, F. Okano, and I. Yuyama, “Time-alternating method based on single-sideband holography with half-zone-plate processing for the enlargement of viewing zones,” Appl. Opt. **38**, 3703–3713 (1999) [CrossRef]