Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points; however, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values so that the approximated complex amplitudes are expressed using only a few representative wavelet coefficients.
© 2017 Optical Society of America
1. Introduction
Computer-generated holograms (CGHs) can generate arbitrary wavefronts from computer-synthesized interference patterns and are used in a wide range of optical applications, including three-dimensional (3D) displays, holographic tweezers, optical elements, and beam generation. These applications require real-time interactive operation; therefore, fast CGH calculation algorithms are in strong demand.
CGHs are calculated by superimposing complex amplitudes emitted from object points. The complex amplitude on a CGH plane can be expressed by the following equation:

u(x_h, y_h) = Σ_{j=1}^{N} a_j u_{z_j}(x_h − x_j, y_h − y_j),  (1)

where u_{z_j}(x, y) = exp(iπ(x² + y²)/(λz_j)) is the point spread function (PSF) of an object point at depth z_j under the Fresnel approximation, a_j and (x_j, y_j, z_j) are the amplitude and coordinates of the j-th of N object points, λ is the wavelength, and the resolution of the CGH is N_h × N_h pixels. The computational complexity of Eq. (1) is O(N_h²N); this superposition process is highly time-consuming even when using the latest computers.
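The superposition of Eq. (1) can be sketched directly. The following is a minimal Python illustration (the function name and parameters are ours, not the paper's) of the naive O(N_h²N) calculation with the Fresnel-approximated PSF:

```python
import numpy as np

def point_cloud_cgh(points, amps, n_h, pitch, wl):
    """Naive CGH calculation: superpose one Fresnel PSF per object point.
    points: list of (x_j, y_j, z_j) in metres; amps: per-point amplitudes.
    Complexity is O(n_h^2 * N), as stated for Eq. (1)."""
    xs = (np.arange(n_h) - n_h // 2) * pitch           # CGH-plane coordinates
    xh, yh = np.meshgrid(xs, xs)
    u = np.zeros((n_h, n_h), dtype=complex)
    for (xj, yj, zj), aj in zip(points, amps):
        r2 = (xh - xj) ** 2 + (yh - yj) ** 2
        u += aj * np.exp(1j * np.pi * r2 / (wl * zj))  # PSF u_z shifted to (xj, yj)
    return u

# Example: a single on-axis point 0.1 m from the CGH plane
u = point_cloud_cgh([(0.0, 0.0, 0.1)], [1.0], n_h=256, pitch=10e-6, wl=633e-9)
```

The explicit loop over object points is exactly the bottleneck the rest of this paper addresses.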
A number of fast CGH calculation algorithms have been proposed, e.g., look-up table (LUT) methods [3–5], recurrence relations [6–8], the image hologram method [9], and wavefront recording plane (WRP) methods [10–13]. LUT methods reduce the cost of evaluating the trigonometric functions and square roots in Eq. (1) by precalculating these functions and storing the precalculated data in LUTs. However, the computational complexity still remains O(N_h²N).
The image hologram method [9] succeeds in drastically reducing the computational complexity. In this method, the 3D object points are placed near the CGH plane so that the radius W_j of the j-th PSF in the CGH plane becomes small. The computational complexity in this case is O(W̄²N), where W̄ is the average radius of the PSFs on the CGH with respect to all object points. The average radius W̄ is determined by

W̄ = (1/N) Σ_{j=1}^{N} W_j.  (2)
WRP methods expand the image hologram method to make it applicable at an arbitrary distance from the CGH. These methods comprise two steps: (1) recording the object points on a virtual plane placed near the 3D object using Eq. (1) and (2) calculating the numerical diffraction from the virtual plane to the CGH plane. The computational complexity is then given by

O(W̄²N + N_h² log N_h),  (3)

where the first term corresponds to the recording step and the second to the FFT-based diffraction calculation. These WRP methods can accelerate the CGH calculation at an arbitrary distance; however, the first step still incurs a heavy computational cost.
To further accelerate the CGH calculation, we propose a fast calculation algorithm using wavelet shrinkage, named WAvelet ShrinkAge-Based superpositIon (WASABI). Wavelet shrinkage [14, 15] eliminates small wavelet coefficient values of the PSF u_{z_j}, thereby yielding an approximated PSF calculated from a few representative wavelet coefficients. The use of these representative wavelet coefficients accelerates the superposition of PSFs in the wavelet domain. Several methods using wavelet transforms have been proposed for digital holography [16–19]; however, to the best of our knowledge, this is the first attempt at developing a wavelet-based CGH acceleration method. This paper is structured as follows: Section 2 describes the WASABI method, Section 3 demonstrates the images reconstructed using the proposed method and the method’s computational performance, and Section 4 presents our conclusions.
2. Proposed method: WASABI
The superposition calculation illustrated in Eq. (1) is time-consuming. The objective of using WASABI is to superpose the PSF as quickly as possible. The WASABI calculations are performed as follows:
- Precomputation: we precompute the PSF uzj (xh, yh) and then transform the space-domain PSF into a wavelet-domain PSF. The number of wavelet coefficients is reduced by wavelet shrinkage.
- Superposition: the reduced wavelet coefficients are superposed in the wavelet domain.
- Inverse transformation: the conversion back from the wavelet domain to the space domain is performed.
2.1. Precomputation
In the precomputation step, a PSF u_{z_j}(x_h, y_h) is calculated from an object point at (0, 0, z_j) and is then transformed from the space domain into the wavelet domain using a fast wavelet transform (FWT) [20]. The computational complexity of the forward and inverse FWTs is O(N_h²).
According to wavelet theory, a PSF u_{z_j}(x_h, y_h) is decomposed using scaling functions as follows:

u_{z_j}(x_h, y_h) = Σ_{m,n} s⁰_{m,n} φ(x_h − m) φ(y_h − n),  (4)

where s⁰_{m,n} are the zero-level scaling coefficients and φ is the scaling function. Mallat suggested that the zero level of the scaling coefficients can be considered as the original signal [20], such that u_{z_j}(x_h, y_h) can be expressed as

u_{z_j}(x_h, y_h) ≈ s⁰(x_h, y_h).  (5)
The FWT decomposes u_{z_j}(x_h, y_h) into scaling coefficients s^ℓ_{m,n} and wavelet coefficients w^{ℓ,h}_{m,n}, w^{ℓ,v}_{m,n}, w^{ℓ,d}_{m,n} by

s^{ℓ+1}_{m,n} = Σ_{p,q} h_p h_q s^ℓ_{2m+p, 2n+q},
w^{ℓ+1,h}_{m,n} = Σ_{p,q} g_p h_q s^ℓ_{2m+p, 2n+q},
w^{ℓ+1,v}_{m,n} = Σ_{p,q} h_p g_q s^ℓ_{2m+p, 2n+q},
w^{ℓ+1,d}_{m,n} = Σ_{p,q} g_p g_q s^ℓ_{2m+p, 2n+q},  (6)

until the desired level ℓ = L is reached, where {h_p} and {g_p} are the two-scale sequences. Several two-scale sequences have been proposed previously. In this paper, we use Daubechies4 for the two-scale sequences and chose L = 2 because we empirically found that these settings result in good-quality reconstructed images.
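One decomposition level of this form can be sketched in a few lines. For brevity, this illustration uses the orthonormal Haar two-scale sequences h = (1, 1)/√2 and g = (1, −1)/√2 instead of the paper's Daubechies4; only the filter taps differ, not the structure of the recursion, and the function name is ours:

```python
import numpy as np

H = np.array([1.0, 1.0]) / np.sqrt(2.0)   # scaling sequence {h_p} (Haar)
G = np.array([1.0, -1.0]) / np.sqrt(2.0)  # wavelet sequence {g_p} (Haar)

def fwt2_level(s):
    """Decompose scaling coefficients s^l into (s^{l+1}, w_h, w_v, w_d)."""
    def filt(a, f):                  # filter along axis 0, downsample by 2
        return f[0] * a[0::2] + f[1] * a[1::2]
    lo, hi = filt(s, H), filt(s, G)  # row stage
    s1 = filt(lo.T, H).T             # column stage: low-low
    wv = filt(lo.T, G).T             # low-high (vertical detail)
    wh = filt(hi.T, H).T             # high-low (horizontal detail)
    wd = filt(hi.T, G).T             # high-high (diagonal detail)
    return s1, wh, wv, wd
```

Applying `fwt2_level` twice realizes L = 2. Because the transform is orthonormal, it preserves signal energy, which is what makes keeping only the largest-amplitude coefficients a reasonable approximation of the PSF.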
After the FWT, the scaling and wavelet coefficients are generally distributed as shown in Fig. 1. Figure 1(c) shows the wavelet decomposition of a PSF (Fig. 1(b)) calculated with z_j = 100 mm, a pixel pitch of 10 μm, and a wavelength of 633 nm. The number of raw scaling and wavelet coefficients is the same as the number of pixels in the CGH plane, N_h². If the next step (the superposition step) used the raw coefficients, we could not expect any acceleration of the CGH calculation; therefore, we use wavelet shrinkage, which eliminates small wavelet coefficient values to express the approximated PSF using only a few representative wavelet coefficients.
After sorting the amplitudes of the scaling coefficients s^L_{m,n} and wavelet coefficients w^{ℓ,·}_{m,n} (where 1 ≤ ℓ ≤ L) in descending order, we select the Nr coefficients with the largest amplitudes. In this paper, Nr is defined as

Nr = rN_h²,  (7)

where r is the reduction ratio. As shown in Fig. 1, the level ℓ takes different values depending on the position (m, n).
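This coefficient selection amounts to a rank-based hard threshold. A minimal sketch (the function name and interface are ours, not the paper's):

```python
import numpy as np

def wavelet_shrinkage(coeffs, r):
    """Keep only the Nr = r * coeffs.size largest-amplitude coefficients
    and zero the rest. 'r' is the reduction ratio of Nr = r * Nh^2."""
    flat = coeffs.ravel().copy()
    nr = max(1, int(r * flat.size))
    keep = np.argsort(np.abs(flat))[::-1][:nr]  # indices of the Nr largest amplitudes
    out = np.zeros_like(flat)
    out[keep] = flat[keep]
    return out.reshape(coeffs.shape), nr
```

Only the kept values, together with their positions (m, n) and levels ℓ, need to be stored for each depth slice.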
The selected coefficients c_{z,k}, together with their positions (m_k, n_k) and levels ℓ_k, form a vector v_z for each depth z. The number of vectors is Nz, where Nz is the number of z-direction slices of the 3D objects, and the vectors are stored in an LUT for the numerical calculation.
2.2. Superposition and inverse transformation
In the superposition step, we superpose the reduced coefficients of all the object points in the wavelet domain. For example, to calculate an object point at (x_1, y_1, z_1), we first read the reduced coefficients of u_{z_1} from the vector v_{z_1}. Each reduced coefficient c_{z,k} at position (m_k, n_k) is then shifted by 2^{−ℓ}x_1 = α_{z,k}x_1 and 2^{−ℓ}y_1 = α_{z,k}y_1 in the horizontal and vertical directions, and the shifted coefficients are added to the wavelet space ψ(m, n). The superposition is expressed as

ψ(m_k + α_{z_j,k}x_j, n_k + α_{z_j,k}y_j) ← ψ(m_k + α_{z_j,k}x_j, n_k + α_{z_j,k}y_j) + a_j c_{z_j,k}.  (11)

As shown in Fig. 1, if point “a” of c_{z_j,k} is shifted to point “b,” the shifted c_{z_j,k} can be superposed using Eq. (11) because the region of these points is the same. Conversely, if point “a” of c_{z_j,k} is shifted to point “c” or “d,” the shifted c_{z_j,k} should not be superposed because the region of these points differs from that of point “a.” The computational complexity of Eq. (11) is O(rW̄²N), which is clearly less than that of Eq. (1).
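The shift rule underlying this step, namely that moving a point by 2^ℓ pixels in space moves its level-ℓ coefficients by one position, can be checked numerically. A minimal sketch with a one-level Haar transform (our simplified stand-in for Daubechies4) and periodic boundaries:

```python
import numpy as np

def haar2_level(s):
    """One level of a 2D Haar FWT: returns (s1, wh, wv, wd)."""
    h, g = np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)
    def filt(a, f):
        return f[0] * a[0::2] + f[1] * a[1::2]
    lo, hi = filt(s, h), filt(s, g)
    return filt(lo.T, h).T, filt(hi.T, h).T, filt(lo.T, g).T, filt(hi.T, g).T

rng = np.random.default_rng(1)
psf = rng.standard_normal((16, 16))          # stand-in for a sampled PSF
shifted = np.roll(psf, (2, 2), axis=(0, 1))  # shift by 2^l = 2 pixels in space
subbands = haar2_level(psf)
subbands_shifted = haar2_level(shifted)
for sb, sb_s in zip(subbands, subbands_shifted):
    # each subband moves by 2^-l * 2 = 1 coefficient, so precomputed PSF
    # coefficients can simply be added at shifted positions in the wavelet space
    assert np.allclose(np.roll(sb, (1, 1), axis=(0, 1)), sb_s)
```

Because downsampling by 2 commutes with even shifts, this holds for any two-scale sequence, which is what allows the precomputed coefficients to be reused for every object point at the same depth.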
In the inverse transformation step, to obtain the complex amplitude on the CGH plane, the superposed coefficients ψ(m, n) are transformed from the wavelet domain to the space domain using the inverse FWT. We extract the new scaling and wavelet coefficients s^L_{m,n}, w^{ℓ,h}_{m,n}, w^{ℓ,v}_{m,n}, and w^{ℓ,d}_{m,n} from ψ(m, n); the inverse FWT then reconstructs the lower level of scaling coefficients from the higher level of scaling and wavelet coefficients as

s^ℓ_{m,n} = Σ_{p,q} [ h_{m−2p}h_{n−2q} s^{ℓ+1}_{p,q} + g_{m−2p}h_{n−2q} w^{ℓ+1,h}_{p,q} + h_{m−2p}g_{n−2q} w^{ℓ+1,v}_{p,q} + g_{m−2p}g_{n−2q} w^{ℓ+1,d}_{p,q} ].  (12)

Similar to Eq. (5), we can consider the zero level of the scaling coefficients obtained using Eq. (12) as the complex amplitude on the CGH plane, u(x_h, y_h) ≈ s⁰(x_h, y_h).  (13)
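A matching sketch of the inverse step (again with Haar in place of Daubechies4, and function names of our own): the reconstruction is exact when no coefficients are discarded and approximate after shrinkage.

```python
import numpy as np

def haar2(s):
    """Forward one-level 2D Haar FWT: returns (s1, wh, wv, wd)."""
    h, g = np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)
    def filt(a, f):
        return f[0] * a[0::2] + f[1] * a[1::2]
    lo, hi = filt(s, h), filt(s, g)
    return filt(lo.T, h).T, filt(hi.T, h).T, filt(lo.T, g).T, filt(hi.T, g).T

def ihaar2(s1, wh, wv, wd):
    """Inverse one-level 2D Haar FWT: rebuild the lower-level scaling
    coefficients from the higher-level scaling and wavelet coefficients."""
    def up(lo, hi):                 # invert one 1D Haar stage along axis 0
        a = np.empty((2 * lo.shape[0],) + lo.shape[1:], dtype=lo.dtype)
        a[0::2] = (lo + hi) / np.sqrt(2)
        a[1::2] = (lo - hi) / np.sqrt(2)
        return a
    lo = up(s1.T, wv.T).T           # undo the column stage
    hi = up(wh.T, wd.T).T
    return up(lo, hi)               # undo the row stage
```

Running `ihaar2(*haar2(u))` returns `u` up to floating-point rounding, which is the perfect-reconstruction property that lets the superposed wavelet space be converted back into the CGH-plane complex amplitude.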
After precomputation, we do not need to perform the precomputation step again during the CGH calculation; we only repeat the superposition and inverse transformation steps for new 3D objects. Therefore, the computational complexity of the proposed method is expressed as

O(rW̄²N + N_h²),  (14)

where the first term corresponds to the superposition step and the second to the inverse FWT.
Figures 2(a)–2(d) show an original PSF and the PSFs approximated using WASABI with z_j = 0.1 m and r = 20%, 10%, and 5%. Figure 3(a) shows the intensity profiles reconstructed from the original PSF of Fig. 2(a) and the approximated PSF of Fig. 2(d) with r = 5%. The intensity profiles are normalized to 1, and the two plots agree well.
Figures 2(e)–2(h) show the interference patterns of two object points calculated using Eq. (1) and using WASABI with r = 20%, 10%, and 5%. The results demonstrate that the interference of the two points is correctly generated via the wavelet domain. Figure 3(b) shows the intensity profiles reconstructed from the original pattern of Fig. 2(e) and the approximated pattern of Fig. 2(h) with r = 5%.
3. Results
In this section, we compare the computational performance of WASABI, and the quality of the images it reconstructs, with those of the conventional look-up table method. The calculation conditions were a wavelength of 633 nm, a CGH sampling interval of 10 μm, and a CGH resolution of 2,048 × 2,048 pixels. The depth range of the 3D images was 5 mm, equally divided into 10 steps (Nz = 10). We used a computer with an Intel Core i7-6700 CPU, 64 GB of memory, the Microsoft Windows 8.1 operating system, and the Microsoft Visual Studio C++ 2013 compiler. All calculations were parallelized across eight CPU threads.
Figure 4(a) shows the reconstructed images of a dinosaur composed of 11,646 object points at two different distances of 0.5 m and 0.05 m. The images in the top row of Fig. 4(a) were obtained using the conventional method. The reconstructed images in the middle row were obtained via WASABI with r=5%; the peak signal-to-noise ratios (PSNRs) of the reconstructed images are 30.2 dB at a distance of 0.5 m and 29.7 dB at a distance of 0.05 m, respectively. The reconstructed images in the bottom row were obtained via WASABI with r=1%; the PSNRs are 29.3 dB at 0.5 m and 28.9 dB at 0.05 m, respectively. The PSNRs of WASABI with r=1% are lower than those in the r=5% case; however, WASABI with r=1% achieves a greater acceleration of the calculation time.
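The PSNR figures quoted here follow the standard definition; a small helper (our own, assuming intensities normalized to a peak of 1.0):

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB between two normalized intensity images."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(img, dtype=float)) ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```

For example, a uniform error of 0.1 against a peak of 1.0 gives an MSE of 0.01 and hence 20 dB.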
Table 1 shows the calculation times for the conventional method and WASABI. At a distance of 0.5 m, the calculation times for the conventional method, WASABI with r=5%, and WASABI with r=1% are approximately 33, 5.6, and 0.93 s, respectively; WASABI with r=1% improves the calculation speed approximately 36-fold compared with the conventional method. Conversely, at a distance of 0.05 m, the improvement from WASABI with r=1% is only approximately 2.7-fold because the calculation time is dominated by the superposition and inverse FWT steps of WASABI.
According to the computational complexity in Eq. (14), the first term is proportional to the number of object points and the area of the PSF, whereas the second term depends only on the CGH size and not on the number of object points or the area of the PSF. Because the dinosaur model is composed of a small number of object points (11,646) and the area of the PSF is small owing to the short distance of 0.05 m, the calculation time for the superposition step is only approximately 10 ms, whereas the calculation time for the inverse FWT is approximately 144 ms. Therefore, under these conditions, WASABI does not significantly accelerate the calculation.
In the case of a sufficiently large number of object points and/or a long distance between a 3D object and the CGH, the superposition term dominates, and the computational complexity of WASABI can be approximated as O(rW̄²N).
Figure 4(b) shows the reconstructed images of a merry-go-round composed of approximately 100,000 points. The reconstructed images in the top row of Fig. 4(b) were obtained using the conventional method at two different distances of 0.5 m and 0.05 m. The reconstructed images in the middle row were obtained using WASABI with r=5%; the PSNRs of the reconstructed images relative to the conventional method are 30.6 dB at a distance of 0.5 m and 30.2 dB at a distance of 0.05 m, respectively. The reconstructed images in the bottom row were obtained via WASABI with r=1%, and the PSNRs are 29.6 dB at 0.5 m and 29.4 dB at 0.05 m, respectively. The PSNRs obtained with r=1% are lower than those obtained with r=5%; however, the calculation time is further accelerated. For the distance of 0.5 m, as shown in Table 2, the calculation times for the conventional method, WASABI with r=5%, and WASABI with r=1% are approximately 281, 45, and 6.9 s, respectively. WASABI with r=1% provides an approximately 41-fold improvement in speed over the conventional method. At a distance of 0.1 m, WASABI can calculate the CGHs at approximately 2.8 frames per second (fps) using the CPU; at a distance of 0.05 m, it reaches approximately 5 fps.
In the case of a greater number of object points, WASABI achieves an even larger acceleration. Figure 4(c) shows the reconstructed images of a fountain composed of approximately one million points. As shown in Table 3, at a distance of 0.5 m, the calculation times of the conventional method, WASABI with r=5%, and WASABI with r=1% are approximately 2,835, 361, and 57 s, respectively. WASABI with r=1% shows an approximately 50-fold improvement in speed over the conventional method. At a distance of 0.05 m, WASABI can calculate the CGHs from one million object points at approximately 1.7 fps using the CPU. All of these calculations were performed using our numerical wave optics library, CWO++ [21].
4. Conclusion
We proposed a CGH acceleration algorithm using a wavelet shrinkage-based superposition method, WASABI, in which approximated PSFs are expressed by a few representative wavelet coefficients. We showed that this method can accelerate the superposition of many PSFs in the wavelet domain. Our future work will involve applying the proposed method to the WRP method [10–13] to further accelerate CGH calculations over long distances between 3D objects and a CGH plane. In addition, we will attempt to apply the proposed method to cylindrical and spherical CGH calculations and to holographic stereogram-based calculation.
This work was partially supported by JSPS KAKENHI Grant Numbers 16K00151 and 25240015.
We thank Dr. Hirotaka Nakayama for providing the 3D models and Dr. Takashi Kakue for his kind support.
References and links
1. T. C. Poon, ed., Digital Holography and Three-Dimensional Display - Principles and Applications (Springer, 2006).
2. T. Shimobaba, T. Kakue, and T. Ito, “Review of fast algorithms and hardware implementations on computer holography,” IEEE Trans. Ind. Informat. 12, 1611–1622 (2016).
3. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2, 28–34 (1993).
5. T. Nishitsuji, T. Shimobaba, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram using run-length encoding based recurrence relation,” Opt. Express 23, 9852–9857 (2015).
6. K. Matsushima and M. Takai, “Recurrence formulas for fast creation of synthetic three-dimensional holograms,” Appl. Opt. 39, 6587–6594 (2000).
7. H. Yoshikawa, “Fast computation of Fresnel holograms employing difference,” Opt. Rev. 8, 331–335 (2001).
8. T. Shimobaba and T. Ito, “An efficient computational method suitable for hardware of computer-generated hologram with phase computation by addition,” Comput. Phys. Commun. 138, 44–52 (2001).
9. T. Yamaguchi and H. Yoshikawa, “Computer-generated image hologram,” Chin. Opt. Lett. 9, 120006 (2011).
11. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18, 19504–19509 (2010).
13. A. Symeonidou, D. Blinder, A. Munteanu, and P. Schelkens, “Computer-generated holograms by multiple wavefront recording plane method with occlusion culling,” Opt. Express 23, 22149–22161 (2015).
14. C. K. Chui, ed., An Introduction to Wavelets (Academic Press, 2014).
15. D. L. Donoho, “De-noising by soft-thresholding,” IEEE Trans. Inf. Theory 41, 613–627 (1995).
17. M. Liebling, T. Blu, and M. Unser, “Fresnelets: new multiresolution wavelet bases for digital holography,” IEEE Trans. Image Process. 12, 29–43 (2003).
18. M. Liebling and M. Unser, “Autofocus for digital Fresnel holograms by use of a Fresnelet-sparsity criterion,” J. Opt. Soc. Am. A 21, 2424–2430 (2004).
19. J. Weng, J. Zhong, and C. Hu, “Digital reconstruction based on angular spectrum diffraction with the ridge of wavelet transform in holographic phase-contrast microscopy,” Opt. Express 16, 21971–21981 (2008).
20. S. G. Mallat, “A theory for multiresolution signal decomposition: the wavelet representation,” IEEE Trans. Pattern Anal. Mach. Intell. 11, 674–693 (1989).
21. T. Shimobaba, J. Weng, T. Sakurai, N. Okada, T. Nishitsuji, N. Takada, A. Shiraki, N. Masuda, and T. Ito, “Computational wave optics library for C++: CWO++ library,” Comput. Phys. Commun. 183, 1124–1138 (2012).