## Abstract

In this study, we present the acceleration of full-color computer-generated holograms (CGHs) using WAvelet ShrinkAge-Based superpositIon (WASABI). The WASABI method uses a wavelet transform, and the light-wave superposition is calculated using only 3%, 5%, or 10% of the light-wave components in wavelet space. We implement the WASABI method for generating full-color CGHs and further combine it with a color space conversion from the RGB color space to the YCbCr color space. We report that the WASABI method is 10–33 times faster than the conventional look-up table method and 2–7 times faster than a depth-layer method based on the fast Fourier transform. Furthermore, the WASABI method in the YCbCr color space is approximately 1.5 times faster than that in the RGB color space.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Electro-holography [1] can reconstruct natural and dynamic three-dimensional (3D) images using computer-generated holograms (CGHs), which are wavefront patterns of light. In electro-holography, it is necessary to increase the speed of CGH calculation to achieve real-time reconstruction and interactive operation systems [2–6] such as 3D displays, optical tweezers, and holographic projection. Hence, fast CGH calculation algorithms are required. Based on the expression of 3D objects, several methods such as point-cloud methods [7,8], polygon methods [9,10], holographic stereogram methods [11–14], and depth-layer-based methods [15] are used for obtaining CGHs. These methods are reviewed in [16,17].

Among the aforementioned methods, the point-cloud methods offer high flexibility for expressing a 3D object as an aggregation of object points, and various point-cloud methods have been used to calculate CGHs from 3D objects expressed as polygons, holographic stereograms, and depth layers. In the point-cloud methods, CGHs are calculated by superposing the light waves, i.e., point spread functions (PSFs), emitted from each object point. Because this superposition of light waves is the most time-consuming process in the entire CGH calculation, sparsity-based algorithms have recently been proposed [18,19]. To accelerate the superposition process, we proposed the WAvelet ShrinkAge-Based superpositIon (WASABI) method [18], which uses a wavelet shrinkage procedure. This method performs the superposition using only the strong amplitudes of the wavelet coefficients of the PSFs in the wavelet domain. Depending on the selectivity of the wavelet coefficients extracted from the PSFs, the WASABI method can reduce the computational complexity associated with the generation of CGHs. Although the WASABI method succeeds in reducing the calculation time of CGHs, it has only been applied to generating single-color CGHs. In 3D display technologies that use electro-holography, full-colorization of CGHs is important for reconstructing natural and realistic 3D images [20].

Here, we propose full-color CGH generation using the WASABI method and accelerate it further by converting from the RGB color space to the YCbCr color space. First, we extend the single-color WASABI method to the red, green, and blue CGH calculations by using the same selectivity of the wavelet coefficients for each color. Additionally, we propose reducing the selectivity of the wavelet coefficients in the WASABI method using color space conversion. In holographic projection and layer-based CGH calculation, accelerating CGH calculation using the YCbCr color space has already been successful [21,22]. The YCbCr color space exploits human visual characteristics: human eyes are sensitive to variations in luminance but insensitive to variations in color differences. Even when the color-difference components Cb and Cr are down-sampled, human eyes cannot perceive the deterioration in image quality while observing the reconstructed RGB image. Therefore, the WASABI method can be accelerated by reducing the selectivity of the wavelet coefficients of Cb and Cr compared with that of the luminance component Y.

## 2. Method

#### 2.1 Full-color CGH calculation

In the point-cloud methods, a monochrome CGH is obtained by superposing the complex amplitudes of the light waves emitted from the point light sources constituting a 3D object. The complex amplitude of the CGH can be expressed as follows [2,18,19]:

The computational complexity of CGH generation is determined by the average radius of the PSFs on the CGH plane and the number of object points. The average radius $\overline{W}$ is given as follows [18]:
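As a concrete illustration of the superposition described in this subsection, the following is a minimal NumPy sketch that accumulates Fresnel-approximated PSFs of a few object points on the hologram plane. The function name `point_cloud_cgh` and the toy point cloud are our own illustrative choices, not code from this work.

```python
import numpy as np

def point_cloud_cgh(points, wavelength, pitch, n=256):
    """Superpose the Fresnel-approximated PSF of every object point.

    points: iterable of (x_j, y_j, z_j, a_j) tuples, with positions in metres
    and a_j the real amplitude taken from the texture map for one colour."""
    k = 2.0 * np.pi / wavelength
    coords = (np.arange(n) - n / 2) * pitch   # physical pixel coordinates
    x, y = np.meshgrid(coords, coords)
    h = np.zeros((n, n), dtype=np.complex128)
    for xj, yj, zj, aj in points:
        r2 = (x - xj) ** 2 + (y - yj) ** 2
        h += aj * np.exp(1j * k * r2 / (2.0 * zj))  # Fresnel zone-plate PSF
    return h

# Three toy object points for the red channel (633 nm and 8 um pitch, as in Sec. 3).
pts = [(0.0, 0.0, 0.05, 1.0), (1e-4, -1e-4, 0.04, 0.5), (-2e-4, 0.0, 0.045, 0.8)]
hologram = point_cloud_cgh(pts, wavelength=633e-9, pitch=8e-6)
```

The loop over object points is the time-consuming step that the WASABI method of Section 2.2 accelerates.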

#### 2.2 WASABI method

This subsection summarizes the WASABI method, which comprises the following steps [18].

- 1. Pre-calculation: the PSFs at distances ${z}_{j}$ are pre-calculated and transformed from the real domain to the wavelet domain using the wavelet transform. The wavelet coefficients of the transformed PSFs are sorted in descending order of magnitude, and the top ${N}_{s}$ wavelet coefficients are stored in a look-up table (LUT).
- 2. Superposition: the reduced wavelet coefficients obtained in the pre-calculation are superposed in the wavelet domain.
- 3. Inverse transformation: the result of the superposition is transformed from the wavelet domain to the real domain using the inverse wavelet transform, yielding the CGH.

Here, the number of wavelet coefficients is defined as ${K}_{s}(z)=s\times \pi {W}_{z}^{2}$, where *s* denotes the selectivity. The computational complexity of the WASABI method is determined by the superposition and inverse transformation steps and can be expressed as $O\left(L\overline{{K}_{s}}+{N}_{h}^{2}\right)=O\left(sL{\overline{W}}^{2}+{N}_{h}^{2}\right)$, where the resolution of the CGH is ${N}_{h}\times {N}_{h}$ pixels and $\overline{{K}_{s}}$ is the average number of wavelet coefficients. If the number of object points is sufficiently large, the computational complexity of the inverse wavelet transform $O({N}_{h}^{2})$ can be ignored. Thus, the total computational complexity of the WASABI method can be approximated as $O(sL{\overline{W}}^{2}+{N}_{h}^{2})\approx O\left(sL{\overline{W}}^{2}\right)$.
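The pre-calculation, shrinkage, and inverse-transformation steps above can be sketched in NumPy. For brevity, this sketch uses a single-level orthonormal Haar basis rather than the Daubechies-16 basis at decomposition level 2 used here, and it shrinks only a single PSF; the helper names (`haar_fwd`, `shrink`, etc.) are hypothetical.

```python
import numpy as np

def haar_fwd(a):
    """One level of an orthonormal 2D Haar wavelet transform."""
    def step(v):  # transform along the last axis
        s = (v[..., ::2] + v[..., 1::2]) / np.sqrt(2.0)
        d = (v[..., ::2] - v[..., 1::2]) / np.sqrt(2.0)
        return np.concatenate([s, d], axis=-1)
    return step(step(a).T).T

def haar_inv(c):
    """Inverse of haar_fwd."""
    def istep(v):
        n = v.shape[-1] // 2
        s, d = v[..., :n], v[..., n:]
        out = np.empty_like(v)
        out[..., ::2] = (s + d) / np.sqrt(2.0)
        out[..., 1::2] = (s - d) / np.sqrt(2.0)
        return out
    return istep(istep(c.T).T)

def shrink(c, selectivity):
    """Keep only the largest-magnitude fraction `selectivity` of coefficients."""
    mags = np.abs(c).ravel()
    k = max(1, int(selectivity * mags.size))
    threshold = np.partition(mags, mags.size - k)[mags.size - k]
    return np.where(np.abs(c) >= threshold, c, 0.0)

# A Fresnel PSF at distance z, as prepared in the pre-calculation step.
n, pitch, wavelength, z = 128, 8e-6, 633e-9, 0.05
coords = (np.arange(n) - n / 2) * pitch
x, y = np.meshgrid(coords, coords)
psf = np.exp(1j * (2 * np.pi / wavelength) * (x**2 + y**2) / (2 * z))

coeffs = haar_fwd(psf)
reduced = shrink(coeffs, 0.10)        # store only ~10% of the coefficients
reconstructed = haar_inv(reduced)
relative_error = np.linalg.norm(reconstructed - psf) / np.linalg.norm(psf)
```

Because the transform is orthonormal, the discarded coefficient energy directly bounds the reconstruction error, which is why keeping only the strongest coefficients preserves most of the PSF.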

Figure 1 shows the schematic of full-color CGH generation using the WASABI method. Here, we used RGB-D images [23] as full-color 3D objects. An RGB-D image comprises a texture map and a depth map: the texture map represents the amplitudes of the red, green, and blue light from the 3D object, and the depth map represents the distance from the image plane to the 3D object. In the pre-calculation step, the wavelengths of the PSFs are set to the red, green, and blue wavelengths, and the LUTs required to store the reduced wavelet coefficients are prepared for each color. Thereby, the complex amplitude $\psi (m,n)$ for each color in the wavelet domain is calculated by the WASABI method in the RGB color space as follows:

where ${v}_{{z}_{j},k}$ is the wavelet coefficient stored in the LUT ${V}_{{z}_{j}}$, $({m}_{k},{n}_{k})$ is the position of the wavelet coefficient ${v}_{{z}_{j},k}$ in the wavelet domain, and $\alpha $ is the wavelet decomposition level. Finally, in the inverse transformation step, the CGH for each color is generated by transforming the complex amplitude $\psi (m,n)$ of each color from the wavelet domain to the real domain.

#### 2.3 WASABI method using color conversion

To accelerate the generation of full-color CGH, we first apply color space conversion to the conventional color CGH calculation presented in Eq. (2). Further, the full-color CGH calculation is accelerated by applying the color space conversion to the color WASABI method presented in Eq. (4). The conversion of RGB components to YCbCr components can be expressed as follows:
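The color space conversion can be sketched as follows. The exact matrix of Eqs. (5)–(7) is not reproduced in this excerpt, so this sketch assumes the common ITU-R BT.601 coefficients; the normalization used in the paper's equations may differ slightly.

```python
import numpy as np

# Analysis matrix, assuming the common ITU-R BT.601 definition.
RGB_TO_YCBCR = np.array([
    [ 0.299,     0.587,     0.114   ],  # Y : luminance
    [-0.168736, -0.331264,  0.5     ],  # Cb: blue-difference chroma
    [ 0.5,      -0.418688, -0.081312],  # Cr: red-difference chroma
])

def rgb_to_ycbcr(rgb):
    """rgb: array of shape (..., 3) with components in [0, 1]."""
    return rgb @ RGB_TO_YCBCR.T

def ycbcr_to_rgb(ycbcr):
    """Invert the conversion to recover RGB."""
    return ycbcr @ np.linalg.inv(RGB_TO_YCBCR).T

pixel = np.array([0.8, 0.4, 0.2])  # an arbitrary RGB sample
ycc = rgb_to_ycbcr(pixel)
```

Note that a grey pixel maps to zero chroma, which is what allows the Cb and Cr components to be computed at a much lower selectivity without visible degradation.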

In our previous works [21,22], we proposed a color hologram calculation using color space conversion. These works down-sampled the Cb and Cr image sizes of the original image and hologram. Down-sampling the image sizes accelerated the color hologram calculation; however, it induced chromatic aberrations. In contrast, we propose a new color hologram calculation in which we decrease the number of object points for the Cb and Cr components instead of down-sampling the image size; consequently, it does not induce chromatic aberrations. We present this color hologram calculation in Eqs. (8) to (10).

The full-color complex amplitudes of Eq. (2) that pass through the YCbCr color space are calculated as follows:

Subsequently, we apply color space conversion to the color WASABI method presented in Eq. (4). Figure 2 shows the schematic of the WASABI method in the YCbCr color space. Since the superposition step in the wavelet domain expressed by Eq. (4) is equivalent to that in the real domain expressed by Eq. (2), the calculation of the complex amplitude in the wavelet domain that passes through the YCbCr color space can be expressed as follows:

At first glance, the computational complexity required by Eq. (10) is three times that required by Eq. (4). However, the calculation time of Eq. (10) does not become three times that of Eq. (4) because the LUTs and 3D object data can be re-used as long as they are only read from memory. Hence, the calculation of the full-color CGH can be accelerated using the WASABI method when the selectivity of the wavelet coefficients of the color-difference components Cb and Cr is reduced sufficiently to outweigh the increase in computational complexity caused by the color space conversion.
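Because the superposition cost per color component scales as $O(sL{\overline{W}}^{2})$ (Section 2.2), the total cost is proportional to the sum of the selectivities, and an upper bound on the speed-up from reducing the chroma selectivities can be estimated. This toy calculation, our own illustration, ignores the color-conversion overhead and the inverse wavelet transforms, which is one reason measured speed-ups are smaller.

```python
# Per-colour superposition cost is proportional to the selectivity, so the
# total cost is proportional to the sum of the selectivities. Ignoring the
# colour-space conversion overhead and the inverse wavelet transforms makes
# this an upper bound on the achievable speed-up over RGB-space WASABI.
def theoretical_speedup(s, chroma_factor):
    rgb_cost = 3.0 * s                        # s_R = s_G = s_B = s
    ycc_cost = s + 2.0 * (s / chroma_factor)  # s_Y = s, s_Cb = s_Cr = s / factor
    return rgb_cost / ycc_cost

for factor in (2, 4, 8):
    print(f"s_Cb = s_Cr = s_Y/{factor}: <= {theoretical_speedup(0.10, factor):.2f}x")
```

For chroma factors of 2, 4, and 8 this bound evaluates to 1.5x, 2.0x, and 2.4x, consistent with (and above) the averaged measured speed-ups of 1.2x, 1.5x, and 1.8x reported in Section 3.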

## 3. Results

First, we compared the computational performance of full-color CGH generation for the conventional methods and the WASABI method in the RGB color space. As the conventional methods, we used the conventional LUT method [7] presented in Eq. (2) and band-limited double-step Fresnel diffraction (BL-DSF) [15], which generates the CGH with the depth-layer-based method by a diffraction calculation using the fast Fourier transform (FFT). The calculation conditions were as follows: the wavelengths of the red, green, and blue light were 633, 532, and 450 nm, respectively; the CGH size was $2,048\times 2,048$ pixels; and the sampling pitch was 8 $\text{\mu m}$.

Figure 3 shows the RGB-D images used as the 3D object data. The size of the RGB-D images [23] was $512\times 512$ pixels, i.e., each comprised $512\times 512=262,144$ point light sources. In BL-DSF, we resized the original image to $2,048\times 2,048$ pixels. The depth range ${z}_{j}\in [0.0\text{cm},5.0\text{cm}]$ was divided into 256 slices. The distance from the CGH plane to the 3D object was 5.0 cm. In the WASABI method, we used Daubechies 16 [24,25] as the wavelet basis function and verified the calculation time and image quality for selectivities of $s=3\%, 5\%,$ and $10\%$. The selectivities for the CGHs of the red, green, and blue light were equal (${s}_{R}={s}_{G}={s}_{B}=s$). For the calculation setup, we used a computer with an Intel Core i7 6800K CPU, 32 GB of memory, the Microsoft Windows 10 operating system, and the Microsoft Visual Studio C++ 2015 compiler. All the calculations were parallelized across 6 CPU cores.

Table 1 shows the calculation times for the conventional methods and the WASABI method with respect to the RGB-D image “Papillon.” When we used Eq. (2) and BL-DSF as the conventional methods, the calculation times were 492 and 110 s, respectively. The calculation times for the WASABI method with $s=10\%$, $s=5\%$, and $s=3\%$ were 47.3, 23.7, and 14.5 s, respectively; thus, the WASABI method with $s=10\%$, $s=5\%$, and $s=3\%$ was 10.4, 20.7, and 33.8 times faster than Eq. (2), and 2.33, 4.64, and 7.58 times faster than BL-DSF, respectively. Theoretically, the acceleration of the WASABI method is inversely proportional to the selectivity; we confirmed that the measured speed-up rates agreed with this theoretical expectation.
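The quoted speed-up factors can be recomputed directly from the times reported in Table 1; ratios recomputed from the rounded times can differ from the quoted values in the last digit.

```python
# Reported calculation times in seconds for "Papillon" (Table 1); the
# dictionary keys are the selectivities s.
t_lut, t_bldsf = 492.0, 110.0
t_wasabi = {0.10: 47.3, 0.05: 23.7, 0.03: 14.5}

for s, t in t_wasabi.items():
    print(f"s={s:.0%}: {t_lut / t:.1f}x vs. LUT, {t_bldsf / t:.2f}x vs. BL-DSF")
```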

Figure 4 shows the reconstructed images of “Papillon” for the conventional methods and the WASABI method in the RGB color space. “Papillon,” recorded on the full-color CGH, was reconstructed at 6.0 to 8.0 cm: the “Papillon” was focused at 6.0 cm, and the background leaves were focused at 8.0 cm. We measured the peak signal-to-noise ratio (PSNR) of the reconstructed images between the conventional method of Eq. (2) and the WASABI method; the PSNR values for the WASABI method with $s=10\%$, $s=5\%$, and $s=3\%$ were 22.3, 21.4, and 21.3 dB, respectively.

Although the PSNRs were approximately 20 dB, the apparent image quality did not deteriorate significantly. This is because wavelet shrinkage discards part of the high-frequency components of the PSFs. The loss of these high-frequency components increases the depth of focus of the reconstructed image when the hologram is observed from the front. Therefore, compared with the reconstructed image obtained by Eq. (2), the reconstructed images obtained by the RGB/YCbCr WASABI methods exhibit less blur in non-focused regions; as a result, the apparent image quality does not deteriorate as much as the PSNR values suggest.
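The PSNR values above follow the standard definition $\mathrm{PSNR}=10\log_{10}(\mathrm{MAX}^{2}/\mathrm{MSE})$; a minimal sketch, in which the 8-bit peak of 255 and the synthetic images are our own assumptions:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64)).astype(float)
degraded = reference + rng.normal(0.0, 20.0, size=reference.shape)
value = psnr(reference, degraded)   # sigma=20 noise lands near 22 dB
```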

Table 2 shows the calculation times for the conventional methods and the WASABI method for the RGB-D image “Buddha.” The calculation times for the conventional methods were 564 and 111 s, respectively. The calculation times for the WASABI method with $s=10\%$, $s=5\%$, and $s=3\%$ were 52.8, 27.8, and 17.1 s, respectively. The average depth distance of “Papillon” was 2.4 cm, and that of “Buddha” was 2.8 cm. Since the number of wavelet coefficients increases with the depth distance, the calculation times for “Buddha” were longer than those for “Papillon.”

Figure 5 shows the reconstructed images of “Buddha” for the conventional methods and the WASABI method in the RGB color space. The dice was focused at 6.5 cm, and the Buddha was focused at 7.5 cm. The PSNR values for the reconstructed images between the conventional method of Eq. (2) and the WASABI method with $s=10\%$, $s=5\%$, and $s=3\%$ were 22.5, 21.0, and 20.8 dB, respectively.

Next, we verified the WASABI method in the YCbCr color space. Figure 6 shows the calculation times for the WASABI method in the RGB and YCbCr color spaces. We set the selectivity of the WASABI method to $s=3\%, 5\%,$ and $10\%$. The selectivities in the RGB color space were ${s}_{R}={s}_{G}={s}_{B}=s$. In the WASABI method in the YCbCr color space, the selectivity of the wavelet coefficients of the luminance component Y was set to ${s}_{Y}=s$. The full-color CGH calculation was performed under the four conditions ${s}_{Cb}={s}_{Cr}={s}_{Y}$, ${s}_{Cb}={s}_{Cr}={s}_{Y}/2$, ${s}_{Cb}={s}_{Cr}={s}_{Y}/4$, and ${s}_{Cb}={s}_{Cr}={s}_{Y}/8$. In the case of ${s}_{Cb}={s}_{Cr}={s}_{Y}$, the calculation time of the WASABI method in the YCbCr color space was longer than that in the RGB color space because of the computational complexity of the color space conversion. However, the WASABI method in the YCbCr color space with ${s}_{Cb}={s}_{Cr}={s}_{Y}/2$, ${s}_{Cb}={s}_{Cr}={s}_{Y}/4$, and ${s}_{Cb}={s}_{Cr}={s}_{Y}/8$ was on average 1.23, 1.58, and 1.82 times faster than that in the RGB color space for “Papillon,” and on average 1.21, 1.52, and 1.81 times faster for “Buddha.”

Further, Figs. 7 and 8 show the reconstructed images for the conventional method and the WASABI method in the RGB and YCbCr color spaces. When the wavelet coefficients of the color-difference components Cb and Cr were reduced to $1/8$th of those of the luminance component Y, the PSNRs between the reconstructed images obtained with the reduced color-difference components and those obtained by the conventional method deteriorated only slightly: by approximately 3 dB for “Papillon” and by approximately 2 dB for “Buddha.” Thus, we confirmed that the WASABI method with reduced selectivities of the wavelet coefficients of the color-difference components Cb and Cr in the YCbCr color space can calculate the full-color CGH faster than the WASABI method in the RGB color space without significant deterioration in the quality of the reconstructed image.

To investigate the influence of the color space conversion on the brightness of the reconstructed image, we examined the brightness, defined as the total sum of the intensities in the reconstructed image. The results are summarized in Tables 3 and 4; the unit of brightness is arbitrary. For each selectivity, the brightness of the WASABI method is almost the same as that of the conventional method of Eq. (2).

Figure 9 shows high-resolution reconstructed images of “Papillon” from an 8,192 × 8,192-pixel hologram using the conventional method of Eq. (2) and the WASABI method in the RGB color space. The PSNR values for the reconstructed images between the conventional method and the WASABI method with s = 10%, s = 5%, and s = 3% were 28.9, 26.7, and 25.8 dB, respectively. We confirmed that the PSNRs improved as the resolutions of the hologram and the input image increased.

Figure 10 shows the 2,048 × 2,048-pixel CGHs of “Papillon” and their amplitude spectra. Figures 10(a) and 10(b) show the CGHs generated from the 2,048 × 2,048-pixel “Papillon” using the conventional method of Eq. (2) and the WASABI method with a selectivity of 10%, respectively. Figures 10(c) and 10(d) show the CGHs generated from the 512 × 512-pixel “Papillon” using the WASABI method with selectivities of 10% and 5%, respectively.

Figure 11 shows the horizontal plots of the spectra of each CGH. Each spectrum has the same spatial bandwidth. In Figs. 10(c) and 10(d), because “Papillon” was decimated to a quarter of its original size, the spectra of the decimated “Papillon” should have only a quarter of the spatial bandwidth of the original spectra. However, as shown in Fig. 11, we cannot observe the decimation effect in the spectra of the CGHs because of the random phase effect.

## 4. Conclusion

In this study, we demonstrated the acceleration of full-color CGH generation using the WASABI method. In our previous works [21,22], we proposed a color hologram calculation using color space conversion, which accelerated the calculation but induced chromatic aberrations. In contrast, the proposed method of Eqs. (8) and (10) accelerates the color hologram calculation without chromatic aberrations. Furthermore, the selectivity of the WASABI method can be decreased by converting from the RGB color space to the YCbCr color space in the full-color CGH calculation. Although the proposed method degrades the image quality slightly, it can generate color holograms at high speed. Therefore, real-time reconstruction [6] of full-color holographic images can be achieved by implementing the WASABI method on high-speed processing devices such as graphics processing units [5] and large-scale field-programmable gate arrays [26].

In the current WASABI methods, the 2,048 × 2,048-pixel input image was down-sampled to 512 × 512 pixels. This down-sampling is due to an inherent problem of the wavelet transform, namely its shift-variance property. In this paper, the wavelet transform is performed up to level 2; that is, the spatial resolution at level 2 drops to 1/4 of that at level 0. Therefore, in consideration of this spatial resolution degradation, we down-sampled the input image in advance. This problem can be solved by using other wavelet transforms (e.g., the stationary wavelet transform) in future work.

In the future, we intend to accelerate the WASABI method in the YCbCr color space by down-sampling the color-difference components Cb and Cr of the 3D object. Additionally, the image quality will be improved by optimizing the wavelet coefficients extracted in the wavelet domain.

## Funding

Japan Society for the Promotion of Science (JSPS) KAKENHI (16K00151).

## References

**1. **P. St-Hilaire, S. A. Benton, M. E. Lucente, M. L. Jepsen, J. Kollin, H. Yoshikawa, and J. S. Underkoffler, “Electronic display system for computational holography,” Proc. SPIE **1212**, 174–182 (1990).

**2. **M. E. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging **2**(1), 28–34 (1993).

**3. **H. Niwase, N. Takada, H. Araki, H. Nakayama, A. Sugiyama, T. Kakue, T. Shimobaba, and T. Ito, “Real-time spatiotemporal division multiplexing electroholography with a single graphics processing unit utilizing movie features,” Opt. Express **22**(23), 28052–28057 (2014).

**4. **J.-S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express **23**(14), 18143–18155 (2015).

**5. **S. Yamada, T. Kakue, T. Shimobaba, and T. Ito, “Interactive holographic display based on finger gestures,” Sci. Rep. **8**(1), 2010 (2018).

**6. **T. Yamaguchi, G. Okabe, and H. Yoshikawa, “Real-time image plane full-color and full-parallax holographic video display system,” Opt. Eng. **46**(12), 125801 (2007).

**7. **S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. **47**(19), D55–D62 (2008).

**8. **Y. Ogihara and Y. Sakamoto, “Fast calculation method of a CGH for a patch model using a point-based method,” Appl. Opt. **54**(1), A76–A83 (2015).

**9. **K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. **48**(34), H54–H63 (2009).

**10. **H. Kim, J. Kwon, and J. Hahn, “Accelerated synthesis of wide-viewing angle polygon computer-generated holograms using the interocular affine similarity of three-dimensional scenes,” Opt. Express **26**(13), 16853–16874 (2018).

**11. **T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. **15**(11), 2722–2729 (1976).

**12. **H. Kang, T. Yamaguchi, and H. Yoshikawa, “Accurate phase-added stereogram to improve the coherent stereogram,” Appl. Opt. **47**(19), D44–D54 (2008).

**13. **K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express **19**(10), 9086–9101 (2011).

**14. **H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues,” Opt. Express **23**(4), 3901–3913 (2015).

**15. **N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express **21**(7), 9192–9197 (2013).

**16. **T. Shimobaba, T. Kakue, and T. Ito, “Review of fast algorithms and hardware implementations on computer holography,” IEEE Trans. Industr. Inform. **12**(4), 1611–1622 (2016).

**17. **P. W. M. Tsang, T.-C. Poon, and Y. M. Wu, “Review of fast methods for point-based computer-generated holography [Invited],” Photon. Res. **6**(9), 837–846 (2018).

**18. **T. Shimobaba and T. Ito, “Fast generation of computer-generated holograms using wavelet shrinkage,” Opt. Express **25**(1), 77–87 (2017).

**19. **D. Blinder and P. Schelkens, “Accelerated computer generated holography using sparse bases in the STFT domain,” Opt. Express **26**(2), 1461–1473 (2018).

**20. **M. Makowski, I. Ducin, K. Kakarenko, J. Suszek, M. Sypek, and A. Kolodziejczyk, “Simple holographic projection in color,” Opt. Express **20**(22), 25130–25136 (2012).

**21. **T. Shimobaba, Y. Nagahama, T. Kakue, N. Takada, N. Okada, Y. Endo, R. Hirayama, D. Hiyama, and T. Ito, “Calculation reduction method for color digital holography and computer-generated hologram using color space conversion,” Opt. Eng. **53**(2), 024108 (2014).

**22. **D. Hiyama, T. Shimobaba, T. Kakue, and T. Ito, “Acceleration of color computer-generated hologram from RGB-D images using color space conversion,” Opt. Commun. **340**, 121–125 (2015).

**23. **S. Wanner, S. Meister, and B. Goldluecke, “Datasets and benchmarks for densely sampled 4D light fields,” in VMV, 225–226 (2013).

**24. **W. K. Pratt, *Digital Image Processing: PIKS Scientific Inside*, 4th ed. (Wiley-Interscience, 2007).

**25. **C. K. Chui, ed., *An Introduction to Wavelets* (Academic, 2014).

**26. **T. Sugie, T. Akamatsu, T. Nishitsuji, R. Hirayama, N. Masuda, H. Nakayama, Y. Ichihashi, A. Shiraki, M. Oikawa, N. Takada, Y. Endo, T. Kakue, T. Shimobaba, and T. Ito, “High-performance parallel computing for next-generation holographic imaging,” Nat. Electron. **1**(4), 254–259 (2018).