Accelerating hologram generation using oriented-separable convolution and wavefront recording planes

Open Access

Abstract

Recently, holographic displays have gained attention owing to their natural presentation of three-dimensional (3D) images; however, the enormous amount of computation has hindered their applicability. This study proposes an oriented-separable convolution accelerated using the wavefront-recording plane (WRP) method and recurrence formulas. We discuss how the orientation of 3D objects affects computational efficiency, how this dependence is overcome by reconsidering the orientation, and the suitability of the proposed method for hardware implementation.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Holographic displays are promising three-dimensional (3D) displays that have attracted attention because they satisfy the cues of human stereoscopic vision [1,2]. To popularize holographic displays, two major limitations must be overcome: the large spatial bandwidth products of holograms and the enormous computational costs of hologram calculations [3]. The two representative types of holographic display are wide-viewing-angle displays [4–9] and near-eye displays [10–13]. The former requires a very large spatial bandwidth product; however, it may become a prevalent 3D display in the future. For instance, for 3D reconstructed images with a field of view of 10 cm $\times$ 10 cm and a viewing angle of approximately 37$^{\circ}$ at a wavelength of 633 nm, the hologram must contain 10 gigapixels ($100,000 \times 100,000$ pixels). This illustrates both the challenge of developing suitable spatial light modulators (SLMs) and the enormous computational cost. In contrast, near-eye displays can reduce the spatial bandwidth product to the same level as that of current SLMs, but the issue of computational cost remains.

To address the enormous computational cost, several acceleration algorithms have been proposed. Hologram calculations are mainly categorized into point-cloud [14–19], polygon [20–25], layer [26–29], light-field [30–35], and deep-learning [36–40] approaches. In recent years, approaches using deep learning as an ultra-high-speed hologram generator have attracted significant attention; in particular, they have shown unprecedented performance for layer-based holograms in near-eye displays. However, it has not been shown whether deep-learning approaches work well for holographic displays that require large spatial bandwidth products [41].

In this study, we propose a point-cloud hologram calculation using separable convolution with wavefront recording planes (WRPs) [18] and a recurrence algorithm [42,43]. Generally, point-cloud algorithms are time-consuming but highly versatile because they can be applied to any 3D format, such as polygons, layers, and light fields. In contrast, the other algorithms are less versatile because each is tied to its inherent 3D format. The proposed method employs the separable property of Fresnel diffraction, referred to as separable convolution [11,17], and improves it by considering the orientation of 3D objects; we call this oriented-separable convolution. To further accelerate the computation, the oriented-separable convolution is combined with WRPs and a recurrence algorithm. Another advantage of the proposed method is its suitability for hardware implementations, such as field-programmable gate arrays (FPGAs).

The rest of the paper is structured as follows. Section 2 describes the proposed method; Section 3 presents the calculation results, compares the computational efficiency of the proposed and conventional methods, and discusses applications to hardware implementation. The last section concludes this study.

2. Proposed method

The point-cloud method calculates a hologram $u(x_h,y_h)$ from point-cloud data with $N$ points via the following convolution:

$$u(x_h,y_h) = \sum_{j=1}^{N} \frac{a_j}{r_{hj}}\exp \left(i \frac{2 \pi}{\lambda} r_{hj} \right) \tag{1}$$
$$ \approx\sum_{j=1}^{N} \frac{a_j}{z_j} \exp \left(\frac{i \pi}{\lambda z_j} \left( (x_h-x_j)^{2} +(y_h-y_j)^{2} \right) \right), \tag{2}$$
where $i=\sqrt{-1}$, $r_{hj}$ is the distance between the hologram pixel $(x_h,y_h)$ and the $j$-th object point, $\lambda$ denotes the light wavelength, and $(x_j,y_j,z_j)$ and $a_j$ denote the coordinates and amplitude of the $j$-th object point, respectively. The second equation, Eq. (2), is obtained by applying the Fresnel approximation to $r_{hj}$. The computational complexity is $O(NWH)$, where $W \times H$ is the number of hologram pixels.
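As a baseline for the accelerations that follow, the direct evaluation of Eq. (2) can be sketched in a few lines of numpy (a minimal illustration, not the authors' implementation; the function name and parameters are our assumptions):

```python
import numpy as np

def pointcloud_hologram(points, amps, W, H, p, wl):
    """Brute-force O(NWH) Fresnel hologram of Eq. (2).

    points: (N, 3) array of (x_j, y_j, z_j) in metres; amps: (N,) amplitudes;
    W, H: hologram size in pixels; p: pixel pitch; wl: wavelength.
    """
    xh = (np.arange(W) - W / 2) * p           # hologram-plane coordinates
    yh = (np.arange(H) - H / 2) * p
    XH, YH = np.meshgrid(xh, yh)              # (H, W) grids
    u = np.zeros((H, W), dtype=np.complex128)
    for (xj, yj, zj), aj in zip(points, amps):
        r2 = (XH - xj) ** 2 + (YH - yj) ** 2  # Fresnel-approximated term
        u += (aj / zj) * np.exp(1j * np.pi / (wl * zj) * r2)
    return u
```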

Figure 1 illustrates the conventional separable convolution [11,17]. Considering a line of the 3D object along the y-axis at $x_j=x_c$ and $z_j=z_c$, Eq. (2) can be separated into horizontal and vertical terms as:

$$u(x_h,y_h; x_c, z_c) = \exp \left(\frac{i \pi}{\lambda z_{c}} (x_h-x_{c})^{2} \right) \sum_{j=1}^{N} \frac{a_j}{z_c} \exp \left(\frac{i \pi}{\lambda z_{c}} (y_h-y_j)^{2} \right). \tag{3}$$

Fig. 1. Outline of a hologram calculation using the separable convolution.

A partial hologram for the point cloud extracted at $z_c$ is then calculated as follows:

$$u(x_h,y_h; z_c) = \sum_{x_c \in [x_1, x_2]} u(x_h,y_h; x_c, z_c), \tag{4}$$
where $[x_1, x_2]$ is the horizontal range of a 3D object. The final hologram $u(x_h,y_h)$ can be obtained as:
$$u(x_h,y_h) = \sum_{z_c \in [z_1,z_2]} u(x_h,y_h; z_c), \tag{5}$$
where $[z_1,z_2]$ is the depth range of a 3D object.
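Under the same assumptions, a minimal sketch of Eqs. (3)–(5) (with the point cloud already grouped into vertical lines keyed by $(x_c, z_c)$, a hypothetical preprocessing step) might read:

```python
import numpy as np

def separable_hologram(lines, W, H, p, wl):
    """Separable convolution of Eqs. (3)-(5).

    lines: dict mapping (x_c, z_c) -> list of (y_j, a_j) on that line.
    """
    xh = (np.arange(W) - W / 2) * p
    yh = (np.arange(H) - H / 2) * p
    u = np.zeros((H, W), dtype=np.complex128)
    for (xc, zc), pts in lines.items():
        k = np.pi / (wl * zc)
        # Vertical sum of Eq. (3): one 1D accumulation, cost H * N_B
        v = np.zeros(H, dtype=np.complex128)
        for yj, aj in pts:
            v += (aj / zc) * np.exp(1j * k * (yh - yj) ** 2)
        # Horizontal chirp of Eq. (3), then a rank-1 outer product, cost W * H
        hterm = np.exp(1j * k * (xh - xc) ** 2)
        u += np.outer(v, hterm)               # sums of Eqs. (4) and (5)
    return u
```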

2.1 Oriented-separable convolution

This separable property accelerates the calculation of Eq. (2). However, the orientation of the 3D object strongly affects the computational efficiency. We consider the difference in computational efficiency using two simple examples with different orientations but the same number of object points, $N = N_A \times N_B$, as shown in Fig. 2.

Fig. 2. Examples of horizontally- and vertically-oriented objects. Green vectors indicate the major orientation of the object.

The computational complexity of the separable convolution in both cases is expressed as:

$$O((H N_B +WH) N_A) = O(HN+N_A WH). \tag{6}$$

For instance, assume the number of hologram pixels is $W \times H = 4000 \times 4000$, with $N_A \times N_B = 2000 \times 10$ points in the left case and $N_A \times N_B = 10 \times 2000$ points in the right case. The computational complexity of the direct calculation in Eq. (2) is approximately $3.2 \times 10^{11}$, whereas the complexities of the separable convolution in the left and right cases are approximately $3.2 \times 10^{10}$ and $2.4 \times 10^{8}$, respectively. Therefore, the orientation must be considered even for the same number of object points.
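These figures can be checked with a few lines of arithmetic (illustrative only; the variable names are ours):

```python
# Complexity figures quoted above, per Eq. (2) and Eq. (6).
W = H = 4000
N_A_left, N_B_left = 2000, 10     # horizontally-oriented object
N_A_right, N_B_right = 10, 2000   # vertically-oriented object
N = N_A_left * N_B_left           # 20,000 points in both cases

print(f"{N * W * H:.1e}")                  # Eq. (2):        3.2e+11
print(f"{H * N + N_A_left * W * H:.1e}")   # Eq. (6), left:  3.2e+10
print(f"{H * N + N_A_right * W * H:.1e}")  # Eq. (6), right: 2.4e+08
```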

Using this property, we propose an oriented-separable convolution method, which becomes more efficient when the orientation vector of the 3D object (the green arrows in Fig. 2) is aligned with the y-axis. Principal component analysis (PCA) is used to detect the orientation vector $\mathbf{v}=(x_v, y_v)$. The x- and y-coordinates of the 3D object at a particular $z_c$ are extracted as a point cloud denoted by the matrix $P_{z_c} \in \mathbb{R}^{N_{r} \times 2}$, where $N_{r}$ is the number of extracted points. The orientation vector is obtained by solving the eigenvalue problem:

$$S {\mathbf v} = \lambda_1 {\mathbf v}, \tag{7}$$
where $S \in \mathbb{R}^{2 \times 2}$ and $\lambda_1$ denote the covariance matrix of $P_{z_c}$ and the maximum eigenvalue of $S$, respectively. The extracted points are rotated to align with the y-axis using the angle $\theta$ of the orientation vector:
$$P_\theta = \mathscr{R}_{\frac{\pi}{2} - \theta} \{ P_{z_c}\}, \tag{8}$$
where $\mathscr{R}_\theta$ denotes a rotation by angle $\theta$. The partial hologram $u_\theta(x_h, y_h)$ of the rotated point cloud $P_\theta$ is calculated using Eq. (4) and then inverse-rotated at angle $\hat{\theta}=-(\frac{\pi}{2} - \theta)$ as:
$$u(x_h,y_h; z_c) = \mathscr{R}_{\hat{\theta}}\{u_\theta(x_h,y_h)\}. \tag{9}$$

These steps are repeated for all $z_c$ to obtain the final hologram.

We reduce the computational cost of the PCA by randomly subsampling the point cloud $P_{z_c}$ to at most 10,000 points when $N_r > 10,000$; this threshold was determined empirically. In addition, the rotation angle of the extracted point cloud is confined to $\theta=\pi/2$ rad if $x_v \leq y_v$ or $\theta=0$ rad if $x_v > y_v$, because an arbitrary angle $\theta$ would require a maximum calculation window of $\sqrt{2} W$ (or $\sqrt{2} H$ if $H>W$). The additional cost of the PCAs and rotations is low and negligible.
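A sketch of this orientation test, assuming numpy (the subsampling threshold and the binary angle choice follow the description above; the function name is ours):

```python
import numpy as np

def orientation_angle(P, max_points=10_000, rng=np.random.default_rng(0)):
    """Return the quantized orientation angle theta of a point cloud.

    P: (N_r, 2) array of (x, y) coordinates extracted at one z_c.
    """
    if len(P) > max_points:                   # random subsampling for the PCA
        P = P[rng.choice(len(P), max_points, replace=False)]
    S = np.cov(P, rowvar=False)               # 2x2 covariance matrix, Eq. (7)
    eigvals, eigvecs = np.linalg.eigh(S)
    xv, yv = np.abs(eigvecs[:, np.argmax(eigvals)])  # major orientation v
    # Quantize to 0 or pi/2; the point cloud is then rotated by pi/2 - theta
    # per Eq. (8), i.e. either by 90 degrees or not at all.
    return np.pi / 2 if xv <= yv else 0.0
```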

2.2 Combination with the WRP method

Although the oriented-separable convolution reduces the computational complexity, WRP methods [18,44–46] accelerate it further. They reduce the calculation window by placing virtual planes, called WRPs, in the vicinity of the 3D object. The size of the calculation window is determined by the distance $z$ between an object point and the WRP, as follows:

$$w_x = 2z \tan(\sin^{-1}(\lambda / 2p_x)) \tag{10}$$
$$ w_y = 2z \tan(\sin^{-1}(\lambda / 2p_y)), \tag{11}$$
where $p_x$ and $p_y$ denote the sampling pitches in the horizontal and vertical directions, respectively. WRP methods reduce the entire calculation window of $x_h \in [-W/2,W/2)$ and $y_h \in [-H/2,H/2)$ to the smaller window $x_h \in [x_j-w_x/2,x_j+w_x/2)$ and $y_h \in [y_j-w_y/2,y_j+w_y/2)$. Combining the WRP method with the oriented-separable convolution is straightforward: after recording the complex amplitudes of all object points on a WRP, a diffraction calculation is performed to obtain the final hologram.
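The window sizes of Eqs. (10) and (11) are easy to evaluate; the sketch below uses the conditions of Section 3 (pitch 3.74 $\mu$m, wavelength 633 nm), while the 1 mm point-to-WRP distance is our own illustrative assumption:

```python
import numpy as np

def wrp_window(z, wl, px, py):
    """Calculation-window size of Eqs. (10)-(11) for a point at distance z."""
    wx = 2 * z * np.tan(np.arcsin(wl / (2 * px)))
    wy = 2 * z * np.tan(np.arcsin(wl / (2 * py)))
    return wx, wy

# A point 1 mm from the WRP needs only a window of roughly 170 um x 170 um,
# instead of the full hologram aperture.
wx, wy = wrp_window(z=1e-3, wl=633e-9, px=3.74e-6, py=3.74e-6)
```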

2.3 Combination with the recurrence formula

Defining $\exp \left(\frac{i \pi}{\lambda z_{c}} (x_h-x_{c})^{2} \right)$ in Eq. (3) as $\exp(i \Gamma_n)$, the phase $\Gamma_n$ can be computed efficiently using the recurrence algorithm [42,43]:

$$\Gamma_{n+1} = \Gamma_n +\delta_n \tag{12}$$
$$ \delta_{n+1} = \delta_n + \delta, \tag{13}$$
where the initial phase is $\Gamma_0= \delta_x^{2} \times p_j$ with $\delta_x = w_x - x_j$, the initial difference is $\delta_0 = (2\delta_x p_x + p_x^{2}) \times p_j$, the constant increment is $\delta = 2 p_x^{2} \times p_j$, and $p_j=1/(2 \lambda z_j)$.

Since the recurrence algorithm computes each phase value with only two additions, it requires fewer hardware resources and fewer CPU operations. We used the recurrence algorithm proposed in [43] because it requires fewer operations than that of [42]. The same recurrence can also be applied to the vertical term $\exp \left(\frac{i \pi}{\lambda z_{c}} (y_h-y_j)^{2} \right)$ in Eq. (3).
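A minimal sketch of the recurrence (using the initial values defined above) with a check against the closed-form quadratic phase; the numeric values are illustrative assumptions:

```python
import numpy as np

def quadratic_phase_recurrence(dx, px, pj, n_samples):
    """Gamma_n = pj * (dx + n * px)**2 via Eqs. (12)-(13)."""
    gammas = np.empty(n_samples)
    gamma = dx * dx * pj                     # Gamma_0
    delta_n = (2 * dx * px + px * px) * pj   # delta_0, the first difference
    delta = 2 * px * px * pj                 # constant second difference
    for n in range(n_samples):
        gammas[n] = gamma
        gamma += delta_n                     # Eq. (12): one addition
        delta_n += delta                     # Eq. (13): one addition
    return gammas

# Verify against the closed form.
dx, px, pj = -1e-4, 3.74e-6, 1.0 / (2 * 633e-9 * 0.1)
n = np.arange(64)
assert np.allclose(quadratic_phase_recurrence(dx, px, pj, 64),
                   pj * (dx + n * px) ** 2)
```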

3. Results and discussion

Figure 3 shows the original 3D objects represented by RGB-D images [47,48]. Each column corresponds to a different 3D object, and the top and bottom rows provide the amplitude and depth information, from which the point-cloud data can be constructed without significant computational cost. Besides RGB-D, the proposed method can be applied to other 3D formats, such as polygons and light fields.
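A minimal sketch of such an RGB-D-to-point-cloud conversion (the scene geometry mirrors the conditions given in Section 3; the depth quantization to 32 levels follows the text below, and the lit-pixel threshold is our assumption):

```python
import numpy as np

def rgbd_to_pointcloud(amp, depth, p, z_near=0.1, thickness=0.02, levels=32):
    """Build a point cloud from an RGB-D pair.

    amp, depth: (H, W) arrays, depth normalized to [0, 1]; p: pixel pitch.
    """
    H, W = amp.shape
    ys, xs = np.nonzero(amp > 0)             # keep only lit pixels
    zq = np.round(depth[ys, xs] * (levels - 1)) / (levels - 1)  # quantize
    x = (xs - W / 2) * p                     # metric coordinates
    y = (ys - H / 2) * p
    z = z_near + zq * thickness              # 10 cm offset, 2 cm thickness
    return np.stack([x, y, z], axis=1), amp[ys, xs]
```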

Fig. 3. Original 3D objects represented by RGB-D images [47,48]. Each column corresponds to a different 3D object, and the top and bottom rows provide the amplitude and depth information.

The calculation conditions were as follows: the sampling pitch on the hologram was 3.74 $\mu$m, and the wavelength was 633 nm. The thickness of the 3D scene was 2 cm, and the distance between the hologram and the nearest object point was 10 cm. The computational environment consisted of an AMD Ryzen 7 3700X CPU with 32 GB of memory, the Microsoft Windows 10 (64-bit) operating system, the Microsoft Visual C++ 2022 compiler, and 16 CPU threads. All calculations were performed using our wave-optics library [49].

Tables 1, 2, and 3 show the calculation times of the conventional and proposed methods for hologram sizes of $1024 \times 1024$, $2048 \times 2048$, and $4096 \times 4096$ pixels. The conventional methods are the layer method [26,28] and the separable convolution [11,17] of Eq. (5). The layer method uses the angular spectrum method to propagate each layer to the target plane; the fast Fourier transforms in the angular spectrum method were performed using the FFTW library [50]. The listed calculation times are averages over five runs.
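For reference, the layer method can be sketched as plain angular-spectrum propagation of each layer to the hologram plane (numpy.fft stands in for FFTW here, and the band-limiting refinements of [26,28] are omitted):

```python
import numpy as np

def angular_spectrum(u, z, wl, p):
    """Propagate a field u (H x W, pitch p) over a distance z."""
    H, W = u.shape
    fx = np.fft.fftfreq(W, d=p)              # spatial frequencies
    fy = np.fft.fftfreq(H, d=p)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wl**2 - FX**2 - FY**2
    Hz = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    Hz[arg < 0] = 0                          # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u) * Hz)

def layer_hologram(layers, wl, p):
    """layers: list of (complex layer field, distance-to-hologram) pairs."""
    return sum(angular_spectrum(u, z, wl, p) for u, z in layers)
```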


Table 1. Calculation times for a hologram size of $1024 \times 1024$ pixels. The numbers of object points for “Papillon,” “Table,” and “Tower” are 257,193, 257,338, and 257,709, respectively. Calculation times are in seconds.


Table 2. Calculation times for a hologram size of $2048 \times 2048$ pixels. The numbers of object points for “Papillon,” “Table,” and “Tower” are 1,027,328, 1,029,007, and 1,030,262, respectively. Calculation times are in seconds.


Table 3. Calculation times for a hologram size of $4096 \times 4096$ pixels. The numbers of object points for “Papillon,” “Table,” and “Tower” are 4,110,805, 4,117,381, and 4,122,527, respectively. Calculation times are in seconds.

The results show that the proposed methods were faster than the others for all hologram sizes; in particular, the proposed method with the WRP and recurrence algorithms was the fastest. Note that with a 45-degree rotation angle, the proposed method can be slower than the conventional separable convolution. The FFTW library used in the layer method is highly optimized through CPU threading, efficient use of cache memory, and single-instruction/multiple-data (SIMD) instructions. In contrast, the proposed method uses only CPU threading and may therefore be accelerated further. The speedup of the proposed method decreased as the number of hologram pixels increased, owing to implementation issues (in particular, inefficient use of cache memory); this could be improved using techniques such as cache blocking.

We used 32 depth levels for all calculations. As the number of depth levels increases, the computation time of the layer method grows proportionally, whereas that of the proposed method increases only slightly because each layer contains sparser object points.

Figures 4, 5, and 6 show the numerical reconstructions from the $4096 \times 4096$-pixel holograms using the conventional method (denoted “Conv.”) and the proposed method (denoted “Prop.”). Random phases were applied to each object point. The structural similarity (SSIM) values between the layer and proposed methods are shown above each reconstruction of the latter. All the reconstructed images exhibited acceptable depth of field and image quality.

Fig. 4. Numerical reconstructions of “Papillon.” The SSIM values between the layer and proposed methods are shown above each reconstruction of the latter.

Fig. 5. Numerical reconstructions of “Table.” The SSIM values between the layer and proposed methods are shown above each reconstruction of the latter.

Fig. 6. Numerical reconstructions of “Tower.” The SSIM values between the layer and proposed methods are shown above each reconstruction of the latter.

Figure 7 shows the optical reconstructions. The holograms were calculated using the proposed method and displayed on a Holoeye GAEA-2 SLM with a pixel pitch of 3.74 $\mu$m at a wavelength of 633 nm. The reconstructed images were captured with a Canon EOS 7D camera, and a rotating diffuser was used to reduce speckle noise. Detailed information on the optical setup is provided in [51]. The optical reconstructions agreed well with the simulation results.

Fig. 7. Optical reconstructions using the proposed method. From left to right, the reconstruction distances from the hologram are shown in ascending order (Visualization 1, Visualization 2, and Visualization 3).

3.1 Discussion

Using FPGAs, special-purpose computers for holography, referred to as holographic reconstruction (HORN) machines [52–54], have been developed. All HORN machines calculate holograms based on Eq. (2), which entails an enormous amount of computation; however, they accelerate the calculation using fully pipelined processing, which is difficult to realize on CPUs and graphics processing units (GPUs), and high parallelization with compact computational units designed around the recurrence algorithm [42,43]. The latest HORN machine achieved computation speeds approximately five times faster than a GPU.

Consider designing the next generation of HORN using the proposed method. The oriented-separable convolution described in subsection 2.1 uses PCA to detect the major orientation vectors, which is difficult to implement on FPGAs. To overcome this, the PCA detecting the orientation vectors of all object points is precalculated on a host computer. The entire 3D point cloud is rotated according to the orientation vectors, and the new HORN machine evaluates Eq. (5) using the WRP method. Finally, the host computer inversely rotates the calculated hologram to obtain the final hologram.

The pipeline circuits of the next-generation HORN can be configured using the same architecture as previous HORNs. To calculate Eq. (3), separate computational circuits should be developed for the y- and x-axis terms. The new HORN machine first calculates $\sum_{j=1}^{N} a_j \exp \left(\frac{i \pi}{\lambda z_{c}} (y_h-y_j)^{2} \right)$, which is equivalent to a one-dimensional version of the conventional HORN machines; the circuit can therefore be designed with fully pipelined processing and high parallelization. Subsequently, the new HORN machine applies $\exp \left(\frac{i \pi}{\lambda z_{c}} (x_h-x_{c})^{2} \right)$, which can likewise be designed with fully pipelined processing and high parallelization. The new HORN machine is expected to achieve speedups of two to three orders of magnitude over current HORNs.

4. Conclusion

This study proposed an oriented-separable convolution combined with WRP and recurrence algorithms. The proposed method surpassed the computational efficiency of the layer method and the conventional separable convolution while maintaining acceptable image quality, and its suitability for hardware implementation was discussed. In future studies, a new HORN machine based on the proposed method may be developed.

Funding

Japan Society for the Promotion of Science (22H03607, 19H01097); IAAR Research Support Program, Chiba University.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. P. St-Hilaire, S. A. Benton, M. E. Lucente, M. L. Jepsen, J. Kollin, H. Yoshikawa, and J. S. Underkoffler, “Electronic display system for computational holography,” in Practical Holography IV, vol. 1212 (SPIE, 1990), pp. 174–182.

2. F. Yaraş, H. Kang, and L. Onural, “State of the art in holographic displays: a survey,” J. Disp. Technol. 6(10), 443–454 (2010).

3. D. Blinder, T. Birnbaum, T. Ito, and T. Shimobaba, “The state-of-the-art in computer generated holography for 3D display,” Light: Advanced Manufacturing (2022).

4. F. Yaraş, H. Kang, and L. Onural, “Circular holographic video display system,” Opt. Express 19(10), 9147–9156 (2011).

5. T. Kozacki, G. Finke, P. Garbat, W. Zaperty, and M. Kujawińska, “Wide angle holographic display system with spatiotemporal multiplexing,” Opt. Express 20(25), 27473–27481 (2012).

6. H. Sasaki, K. Yamamoto, K. Wakunami, Y. Ichihashi, R. Oi, and T. Senoh, “Large size three-dimensional video by electronic holography using multiple spatial light modulators,” Sci. Rep. 4(1), 4000 (2014).

7. J. Park, K. Lee, and Y. Park, “Ultrathin wide-angle large-area digital 3D holographic display using a non-periodic photon sieve,” Nat. Commun. 10, 1–8 (2019).

8. Y. Takaki and M. Nakaoka, “Scalable screen-size enlargement by multi-channel viewing-zone scanning holography,” Opt. Express 24(16), 18772–18781 (2016).

9. B. Lee, D. Yoo, J. Jeong, S. Lee, D. Lee, and B. Lee, “Wide-angle speckleless DMD holographic display using structured illumination with temporal multiplexing,” Opt. Lett. 45(8), 2148–2151 (2020).

10. E. Murakami, Y. Oguro, and Y. Sakamoto, “Study on compact head-mounted display system using electro-holography for augmented reality,” IEICE Trans. Electron. E100.C(11), 965–971 (2017).

11. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017).

12. J.-H. Park and S.-B. Kim, “Optical see-through holographic near-eye-display with eyebox steering and depth of field control,” Opt. Express 26(21), 27076–27088 (2018).

13. C. Chang, K. Bang, G. Wetzstein, B. Lee, and L. Gao, “Toward the next-generation VR/AR optics: a review of holographic near-eye displays from a human-centric perspective,” Optica 7(11), 1563–1578 (2020).

14. M. E. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993).

15. M. Yamaguchi, H. Hoshino, T. Honda, and N. Ohyama, “Phase-added stereogram: calculation of hologram using computer graphics technique,” in Practical Holography VII: Imaging and Materials, vol. 1914 (SPIE, 1993), pp. 25–31.

16. S.-C. Kim and E.-S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008).

17. Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. A. Tanjung, C. Tan, and T.-C. Chong, “Fast CGH computation using S-LUT on GPU,” Opt. Express 17(21), 18543–18555 (2009).

18. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009).

19. D. Blinder and P. Schelkens, “Phase added sub-stereograms for accelerating computer generated holography,” Opt. Express 28(11), 16924–16934 (2020).

20. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20(9), 1755–1762 (2003).

21. T. Yamaguchi, G. Okabe, and H. Yoshikawa, “Real-time image plane full-color and full-parallax holographic video display system,” Opt. Eng. 46(12), 125801 (2007).

22. L. Ahrenberg, P. Benzie, M. Magnor, and J. Watson, “Computer generated holograms from three dimensional meshes using an analytic light transport model,” Appl. Opt. 47(10), 1567–1574 (2008).

23. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48(34), H54–H63 (2009).

24. D. Im, J. Cho, J. Hahn, B. Lee, and H. Kim, “Accelerated synthesis algorithm of polygon computer-generated holograms,” Opt. Express 23(3), 2863–2871 (2015).

25. F. Wang, T. Shimobaba, Y. Zhang, T. Kakue, and T. Ito, “Acceleration of polygon-based computer-generated holograms using look-up tables and reduction of the table size via principal component analysis,” Opt. Express 29(22), 35442–35455 (2021).

26. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express 21(7), 9192–9197 (2013).

27. J.-S. Chen and D. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015).

28. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015).

29. H. Zhang, L. Cao, and G. Jin, “Computer-generated hologram with occlusion effect using layer-based processing,” Appl. Opt. 56(13), F138–F143 (2017).

30. T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. 15(11), 2722–2729 (1976).

31. Y. Sando, M. Itoh, and T. Yatagai, “Color computer-generated holograms from projection images,” Opt. Express 12(11), 2487–2493 (2004).

32. K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express 19(10), 9086–9101 (2011).

33. Y. Ichihashi, R. Oi, T. Senoh, K. Yamamoto, and T. Kurita, “Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms,” Opt. Express 20(19), 21645–21655 (2012).

34. N. Chen, Z. Ren, and E. Y. Lam, “High-resolution Fourier hologram synthesis from photographic images through computing the light field,” Appl. Opt. 55(7), 1751–1756 (2016).

35. L. Shi, F.-C. Huang, W. Lopes, W. Matusik, and D. Luebke, “Near-eye light field holographic rendering with spherical waves for wide field of view interactive 3D computer graphics,” ACM Trans. Graph. 36(6), 1–17 (2017).

36. R. Horisaki, R. Takagi, and J. Tanida, “Deep-learning-generated holography,” Appl. Opt. 57(14), 3859–3863 (2018).

37. H. Goi, K. Komuro, and T. Nomura, “Deep-learning-based binary hologram,” Appl. Opt. 59(23), 7103–7108 (2020).

38. L. Shi, B. Li, C. Kim, P. Kellnhofer, and W. Matusik, “Towards real-time photorealistic 3D holography with deep neural networks,” Nature 591(7849), 234–239 (2021).

39. S.-C. Liu and D. Chu, “Deep learning for hologram generation,” Opt. Express 29(17), 27373–27395 (2021).

40. L. Shi, B. Li, and W. Matusik, “End-to-end learning of 3D phase-only holograms for holographic display,” Light: Sci. Appl. 11(1), 247 (2022).

41. T. Shimobaba, D. Blinder, T. Birnbaum, I. Hoshi, H. Shiomi, P. Schelkens, and T. Ito, “Deep-learning computational holography: A review,” Front. Photon. 3, 1 (2022).

42. T. Shimobaba and T. Ito, “An efficient computational method suitable for hardware of computer-generated hologram with phase computation by addition,” Comput. Phys. Commun. 138(1), 44–52 (2001).

43. T. Shimobaba, S. Hishinuma, and T. Ito, “Special-purpose computer for holography HORN-4 with recurrence algorithm,” Comput. Phys. Commun. 148(2), 160–170 (2002).

44. P. Tsang, W.-K. Cheung, T.-C. Poon, and C. Zhou, “Holographic video at 40 frames per second for 4-million object points,” Opt. Express 19(16), 15205–15211 (2011).

45. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, T. Kakue, and T. Ito, “Fast calculation of computer-generated hologram for RGB and depth images using wavefront recording plane method,” Photonics Lett. Pol. 6, 90–92 (2014).

46. A. Symeonidou, D. Blinder, A. Munteanu, and P. Schelkens, “Computer-generated holograms by multiple wavefront recording plane method with occlusion culling,” Opt. Express 23(17), 22149–22161 (2015).

47. K. Honauer, O. Johannsen, D. Kondermann, and B. Goldluecke, “A dataset and evaluation methodology for depth estimation on 4D light fields,” in Asian Conference on Computer Vision (Springer, 2016), pp. 19–34.

48. “4D Light Field Dataset,” https://lightfield-analysis.uni-konstanz.de/.

49. T. Shimobaba, J. Weng, T. Sakurai, N. Okada, T. Nishitsuji, N. Takada, A. Shiraki, N. Masuda, and T. Ito, “Computational wave optics library for C++: CWO++ library,” Comput. Phys. Commun. 183(5), 1124–1138 (2012).

50. M. Frigo and S. G. Johnson, “The design and implementation of FFTW3,” Proc. IEEE 93(2), 216–231 (2005).

51. D. Yasuki, T. Shimobaba, M. Makowski, J. Suszek, M. Sypek, T. Kakue, and T. Ito, “Real-valued layer-based hologram calculation,” Opt. Express 30(5), 7821–7830 (2022).

52. T. Ito, T. Yabe, M. Okazaki, and M. Yanagi, “Special-purpose computer HORN-1 for reconstruction of virtual image in three dimensions,” Comput. Phys. Commun. 82(2-3), 104–110 (1994).

53. T. Ito, N. Masuda, K. Yoshimura, A. Shiraki, T. Shimobaba, and T. Sugie, “Special-purpose computer HORN-5 for a real-time electroholography,” Opt. Express 13(6), 1923–1932 (2005).

54. T. Sugie, T. Akamatsu, T. Nishitsuji, R. Hirayama, N. Masuda, H. Nakayama, Y. Ichihashi, A. Shiraki, M. Oikawa, N. Takada, Y. Endo, T. Kakue, T. Shimobaba, and T. Ito, “High-performance parallel computing for next-generation holographic imaging,” Nat. Electron. 1(4), 254–259 (2018).

Supplementary Material (3)

Visualization 1: Movie corresponding to Fig. 7.
Visualization 2: Movie corresponding to Fig. 7.
Visualization 3: Movie corresponding to Fig. 7.
