
Superpixel-based sub-hologram method for real-time color three-dimensional holographic display with large size

Open Access

Abstract

One of the biggest challenges for large-size three-dimensional (3D) holographic display based on the computer-generated hologram (CGH) is the trade-off between computation time and reconstruction quality, which has limited the real-time synthesis of high-quality holographic images. In this paper, we propose a superpixel-based sub-hologram (SBS) method to reduce the computation time without sacrificing the quality of the reconstructed image. The SBS method divides the target scene into a collection of superpixels, each composed of adjacent object points. The region of the sub-hologram corresponding to each superpixel is determined by an approximation method. Since the size and complexity of the diffraction regions are reduced, the hologram generation time decreases significantly. The computation time was found to be reduced by 94.89% compared with the conventional sub-hologram method. It is shown that the proposed method implemented on the graphics processing unit (GPU) framework can achieve real-time (> 24 fps) color 3D holographic display with a display size of 155.52 mm × 276.48 mm.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Holographic display based on the CGH has been considered one of the ultimate technologies for 3D display. It can provide human eyes with full parallax and depth information, thus providing photorealistic, refreshable 3D images [1–4]. However, CGH-based holographic display has suffered from two critical issues [4–8]. One is the heavy computational complexity involved in the generation of the CGH [6–8]. The other is that the pixel pitch and resolution of the spatial light modulator (SLM) limit the size of the reconstructed image [4,5].

So far, various algorithms have been proposed to improve the computational efficiency of holograms, such as look-up table (LUT)-based [9–15], layer-based [16–18], wavefront recording plane (WRP)-based [19–22], polygon-based [23–25], field programmable gate array (FPGA)-based [26] and GPU-based [27] methods. However, these methods cannot currently achieve real-time color reconstruction at large scale and high resolution.

In order to obtain large-scale 3D reconstructed images, the liquid crystal display (LCD) panel has become a new type of holographic display device. The large pixel pitch of commercial LCDs results in insufficient sampling frequency, which aliases the spectral distribution of the sampled scene. This aliasing problem introduces noise into the reconstructed image. The sub-hologram (SH) method [28] has been proposed to reduce the noise caused by the aliasing problem, but its computation time needs to be decreased further to achieve real-time, large-size color 3D holographic display. Another method suitable for LCDs with large pixel pitch is the block hologram (BH) method [29], which reduces the computational complexity, but its image quality is affected by the aliasing problem. The hogel-based holographic stereogram is another method for generating large-scale 3D images [30–33]. The shading and occlusion problems are automatically solved during the rendering process, making the 3D scene more realistic, but the computing cost of this method is extremely high. The methods mentioned above provide solutions for large-scale 3D holographic display. However, they face the trade-off between computation time and image quality, which limits the real-time generation of high-quality, large-size 3D reconstructed images.

In this paper, to alleviate the trade-off between image quality and computation time, the superpixel-based sub-hologram (SBS) method is proposed, which reduces the computation time without affecting image quality and enables real-time generation of the CGH. The computation time was found to be reduced by 94.89% compared with the SH method. In addition, it is confirmed that the proposed method implemented on the GPU framework can achieve real-time (> 24 fps) color 3D holographic display with a large display size of 155.52 mm × 276.48 mm.

In Section 2, we introduce the principle of the proposed SBS method. In Section 3, to verify the feasibility of the SBS method, we conduct numerical and optical experiments to compare the computation time and reconstruction quality of the SBS method with those of the SH and BH methods. In Section 4, we summarize the innovative features and experimental results of our proposed method and give an outlook on future developments.

2. Proposed SBS method

As shown in Fig. 1(a), a 3D object is considered as the target of the 3D holographic display. When generating the hologram, the 3D object is decomposed into a collection of superpixels, each of which is a collection of adjacent self-illuminating points. Each superpixel corresponds to a unique diffraction region on the CGH plane, and the amplitude distribution in that region is regarded as the SBS. Figure 1(b) shows the operational diagram of the proposed method. We obtain the intensity map and depth map of the 3D object: the intensity map provides the amplitude of each object point, and the depth map provides its spatial coordinates. The intensity map is divided into RGB channels. The hologram generation process for each channel is composed of three steps: 1) determination of the calculation region of the SBS corresponding to each superpixel, 2) error compensation for the SBSs, and 3) calculation and superposition of the SBSs. The final reconstruction is a superposition of the three reconstructions produced by the holograms of the RGB channels.


Fig. 1. (a) Overall illustration of the proposed SBS method. The computer-generated 3D object is divided into superpixels, which consist of several adjacent object points. The calculation region of the SBS is determined by the diffraction angle. (b) Operational diagram of the three-step process of the proposed SBS method. 1) Determination of the calculation region of the SBS corresponding to each superpixel, 2) error compensation for the SBSs, 3) calculation and superposition of the SBSs. The final reconstruction is a superposition of the three reconstructions produced by the holograms of the RGB channels.


2.1 Point source method

In the point source method, a 3D object is considered to be composed of a large number of point sources. The complex amplitude of the point sources of a 3D object arriving at the hologram plane is given by

$$H(x^{\prime},y^{\prime}) = \sum\limits_{i = 1}^{N} \frac{A_i}{r_i}\exp \left[ j\left( \frac{2\pi}{\lambda} r_i + \phi_i \right) \right], $$
where N represents the number of sampling points, $A_i$ and $\phi_i$ denote the amplitude and initial phase of object point i, respectively, and $r_i$ denotes the distance between the i-th object point $(x_i, y_i, z_i)$ and the pixel $(x^{\prime}, y^{\prime}, 0)$ on the hologram plane, which can be expressed as follows:
$$r_i = \sqrt{(x^{\prime} - x_i)^2 + (y^{\prime} - y_i)^2 + z_i^2}. $$
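As a concrete illustration, Eqs. (1)–(2) amount to a direct superposition of spherical wavefronts over all object points. The sketch below (function and array names are our own, not from the paper; a practical implementation would vectorize over points or use the acceleration techniques surveyed in Section 1) shows the computation in NumPy:

```python
import numpy as np

def point_source_hologram(points, amps, phases, pitch, wavelength, shape):
    """Superpose spherical wavefronts from all object points (Eq. 1).

    points:     (N, 3) array of object-point coordinates (x_i, y_i, z_i), in metres
    amps:       (N,) amplitudes A_i
    phases:     (N,) initial phases phi_i
    pitch:      hologram pixel pitch p, in metres
    wavelength: lambda, in metres
    shape:      (rows, cols) of the hologram plane
    """
    rows, cols = shape
    # Hologram-plane pixel coordinates (x', y', 0), centred on the optical axis.
    y = (np.arange(rows) - rows / 2) * pitch
    x = (np.arange(cols) - cols / 2) * pitch
    X, Y = np.meshgrid(x, y)

    H = np.zeros(shape, dtype=np.complex128)
    k = 2 * np.pi / wavelength
    for (xi, yi, zi), Ai, phi in zip(points, amps, phases):
        r = np.sqrt((X - xi) ** 2 + (Y - yi) ** 2 + zi ** 2)  # Eq. (2)
        H += (Ai / r) * np.exp(1j * (k * r + phi))            # Eq. (1) summand
    return H
```

The O(N × rows × cols) cost of this loop is exactly the computational burden that the SBS method reduces by restricting each point's contribution to a small window.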

2.2 Calculation region of the SBS

Figure 2 illustrates the determination of the SBS region. The blue and red points represent superpixels with different depths. Based on diffraction theory, the calculation region of the SBS is determined by the diffraction region of the central object point $O_{central}(x_c, y_c, z_c)$ of the superpixel. Theoretically, the calculation area of the SBS is circular. The diffraction angle can be expressed as follows:

$$\theta = \arcsin \frac{\lambda}{2p}, $$
where $\lambda$ represents the wavelength, p denotes the pixel pitch of the CGH. The radius of the diffraction region is given by:
$$R = z_c \tan \theta \approx z_c \sin \theta = \frac{\lambda z_c}{2p}, $$
where $z_c$ denotes the distance between the CGH plane and $O_{central}$. To facilitate implementation of the circular SBS region in computer programs, it is approximated by its circumscribed square region, whose side length is $L = 2R$.
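Eqs. (3)–(4) and the square approximation reduce to a few lines of arithmetic. A minimal sketch (the function name is ours) returning the diffraction angle, the region radius, and the square side length:

```python
import numpy as np

def sbs_region(z_c, wavelength, pitch):
    """SBS calculation region from the central object point, Eqs. (3)-(4).

    z_c:        distance between the CGH plane and the central object point
    wavelength: lambda
    pitch:      CGH pixel pitch p
    Returns (theta, R, L): diffraction angle, region radius, and the side
    length of the circumscribed square used in place of the circular region.
    """
    theta = np.arcsin(wavelength / (2 * pitch))  # Eq. (3)
    R = z_c * wavelength / (2 * pitch)           # Eq. (4), tan(theta) ~ sin(theta)
    L = 2 * R                                    # circumscribed square side
    return theta, R, L
```

With the paper's parameters (638 nm red light, 72 µm pixel pitch, 1 m reconstruction distance), this yields an SBS region a few millimetres across, far smaller than the full hologram, which is the source of the speedup.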


Fig. 2. Schematic of the SBS region. The calculation region of the SBS can be deduced from the diffraction angle of the corresponding central object point. The circular regions of the SBSs are approximated as circumscribed square regions to facilitate implementation of the SBS method.


2.3 Error compensation

Theoretically, the diffraction region of a non-central object point does not coincide with that of the central object point, as shown in Fig. 3(a). In the SBS method, the diffraction region of the superpixel is approximated as the diffraction region of the central object point, which results in missing information in the reconstruction.


Fig. 3. Error analysis and compensation of the SBS method. (a) Error analysis. The diffraction region of the superpixel is approximated as the diffraction region of the central object point. This approximation causes information loss in the SBS method. (b) Error compensation. By extending the region of the SBS to record complete information of the superpixel, the information loss can be compensated.


An error compensation method is introduced to reduce this error, as shown in Fig. 3(b). The orange area is the diffraction region corresponding to the central object point A, which is the region of the SBS. The blue areas are the diffraction regions of the non-central object points B and C. We extend the region of the SBS to record the complete information of the superpixel. The extension length of the SBS can be expressed as follows:

$$\Delta L = \frac{n - 1}{2}\delta, $$
where n denotes the number of object points in the superpixel and $\delta$ denotes the distance between adjacent object points in the intensity map. The side length of the SBS region is extended to $L + 2\Delta L$ to fully cover the diffraction regions of all object points in the superpixel.
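The compensation of Eq. (5) is a one-line computation; a minimal sketch (the function name is ours):

```python
def compensated_side(L, n, delta):
    """Extended SBS side length after error compensation, Eq. (5).

    L:     original SBS side length (L = 2R)
    n:     number of object points in the superpixel
    delta: spacing between adjacent object points in the intensity map
    Returns L + 2*dL, which covers the diffraction regions of all
    points in the superpixel, not only the central one.
    """
    dL = (n - 1) / 2 * delta  # Eq. (5)
    return L + 2 * dL
```

For example, a superpixel of 5 points with 0.5 mm spacing extends a 10 mm window to 12 mm, a modest cost compared with the information loss it prevents.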

2.4 Wavefront distribution on the CGH plane

The SBS method is based on the point source method. In the SBS method, the target scene is decomposed into a collection of superpixels, and the wavefront distribution on the hologram plane produced by each superpixel can be described as:

$$H_k(x^{\prime},y^{\prime}) = \sum\limits_{i = 1}^{n} \frac{A_i}{r_i}\exp \left[ j\left( \frac{2\pi}{\lambda} r_i + \phi_i \right) \right] \mathrm{rect}\left( \frac{x^{\prime} - x_c}{2R} \right) \mathrm{rect}\left( \frac{y^{\prime} - y_c}{2R} \right), $$
where n represents the number of object points in the k-th superpixel, $A_i$ and $\phi_i$ denote the amplitude and initial phase of object point i in the superpixel, respectively, and $\mathrm{rect}$ denotes the rectangular (gate) function. $r_i$ denotes the distance between the i-th object point $(x_i, y_i, z_i)$ and the pixel $(x^{\prime}, y^{\prime}, 0)$ on the hologram plane, as expressed in Eq. (2).

The complex amplitude distribution of the final hologram is the superposition of the complex amplitude distributions of all the SBSs, which can be described as:

$$H(x^{\prime},y^{\prime}) = \sum\limits_{k = 1}^{S} H_k(x^{\prime},y^{\prime}), $$
where S denotes the number of superpixels in the target scene.
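Eqs. (6)–(7) can be sketched as follows: each sub-hologram is evaluated only inside its square window and accumulated in place into the full hologram. The data layout, and the use of the superpixel mean as a stand-in for the central object point, are our own assumptions for illustration:

```python
import numpy as np

def sbs_hologram(superpixels, pitch, wavelength, shape):
    """Accumulate all sub-holograms into the final hologram, Eqs. (6)-(7).

    superpixels: list of dicts with keys 'points' (n, 3), 'amps' (n,),
                 'phases' (n,) -- a hypothetical data layout.
    """
    rows, cols = shape
    H = np.zeros(shape, dtype=np.complex128)
    k = 2 * np.pi / wavelength
    for sp in superpixels:
        pts = sp['points']
        xc, yc, zc = pts.mean(axis=0)          # stand-in for O_central
        R = zc * wavelength / (2 * pitch)      # Eq. (4)
        half = int(np.ceil(R / pitch))
        # Pixel bounds of the square SBS window, clipped to the hologram.
        ic = int(yc / pitch + rows / 2)
        jc = int(xc / pitch + cols / 2)
        i0, i1 = max(ic - half, 0), min(ic + half, rows)
        j0, j1 = max(jc - half, 0), min(jc + half, cols)
        # Physical coordinates of the window pixels only.
        y = (np.arange(i0, i1) - rows / 2) * pitch
        x = (np.arange(j0, j1) - cols / 2) * pitch
        X, Y = np.meshgrid(x, y)
        for (xi, yi, zi), Ai, phi in zip(pts, sp['amps'], sp['phases']):
            r = np.sqrt((X - xi) ** 2 + (Y - yi) ** 2 + zi ** 2)
            H[i0:i1, j0:j1] += (Ai / r) * np.exp(1j * (k * r + phi))
    return H
```

Because each point's wavefront is evaluated over a window of side roughly $2R$ instead of the full hologram, the cost per point drops from O(rows × cols) to O((2R/p)²), which is the source of the reported 94.89% reduction.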

Compared with the point source method, the SBS method reduces the computation time by limiting the diffraction area of the object points, and the reconstruction quality remains unaffected owing to the error compensation method.

To evaluate the reconstruction quality of our proposed method, we calculate the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the reconstructed images. The PSNR is usually used to evaluate the interference of background noise on the reconstructed image, which can be expressed as follows:

$$PSNR = 10 \times \log_{10}\left( \frac{255^2}{MSE} \right), $$
where $MSE$ denotes the mean square error of original and reconstructed images, which can be described as:
$$MSE = \frac{1}{MN}\sum\limits_{m = 1}^{M} \sum\limits_{n = 1}^{N} [I_0(m,n) - I_r(m,n)]^2, $$
where M and N are the numbers of rows and columns of the original image, respectively. ${I_0}(m,n)$ and ${I_r}(m,n)$ denote the pixel values of the pixel $(m,n)$ of the original image and the reconstructed image, respectively. The SSIM is an index to evaluate the similarity between the original image and the reconstructed image, which can be expressed as follows:
$$SSIM = \frac{(2\mu_o \mu_r + C_1)(2\sigma_{or} + C_2)}{(\mu_o^2 + \mu_r^2 + C_1)(\sigma_o^2 + \sigma_r^2 + C_2)}, $$
where $\mu_o$ and $\mu_r$ are the average pixel values of the original image and the reconstructed image, respectively. $\sigma_o^2$ and $\sigma_r^2$ are the variances of the pixel values of the original image and the reconstructed image, respectively, and $\sigma_{or}$ is the covariance of the two images.
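Eqs. (8)–(10) can be computed directly; the sketch below evaluates both metrics globally (a single window over the whole image). The $C_1$/$C_2$ stabilizing constants are not specified in the paper, so the common convention from the SSIM literature is assumed here:

```python
import numpy as np

def psnr(orig, recon):
    """Peak signal-to-noise ratio for 8-bit images, Eqs. (8)-(9)."""
    o = orig.astype(np.float64)
    r = recon.astype(np.float64)
    mse = np.mean((o - r) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def ssim(orig, recon, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global (single-window) structural similarity, Eq. (10)."""
    o = orig.astype(np.float64)
    r = recon.astype(np.float64)
    mu_o, mu_r = o.mean(), r.mean()
    var_o, var_r = o.var(), r.var()          # sigma_o^2, sigma_r^2
    cov = ((o - mu_o) * (r - mu_r)).mean()   # sigma_or
    return ((2 * mu_o * mu_r + c1) * (2 * cov + c2)) / (
        (mu_o ** 2 + mu_r ** 2 + c1) * (var_o + var_r + c2))
```

Note that standard SSIM implementations average the index over local sliding windows; the single-window form above matches Eq. (10) as written.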

3. Numerical simulations and optical experiments

3.1 Numerical simulations

In order to verify the feasibility of our proposed method, we carry out numerical simulations to compare it with the BH and SH methods. All simulations were performed on an Intel Core i9-11900K 3.5 GHz central processing unit (CPU) with 64 GB of RAM. The simulations follow the physical propagation process of the optical experiment. We obtain the intensity map and the depth map of the 3D object, and the intensity map is divided into superpixels. The parameters of the numerical simulations are listed in Table 1.


Table 1. Parameters of our numerical simulations

As shown in Fig. 4, to demonstrate the reconstruction quality of our proposed SBS method, we compare the reconstructed images of the BH, SH and SBS methods. The resolution board image and the color prism image are used to generate the holograms, and the PSNR and SSIM are used to evaluate the quality of the reconstructed images. It is demonstrated that both the SH and SBS methods are correct and feasible for CGH generation, but the SH method takes much more time. The reconstructed image of the BH method suffers from the aliasing problem, resulting in the lowest PSNR and SSIM.


Fig. 4. Numerical simulation results.


3.2 Optical experiments

To verify the consistency of the numerical simulations with the practical display results, we conduct optical experiments to reconstruct the original images with the proposed method. The experimental setup is illustrated in Fig. 5(a). The color laser, integrating red (638 nm), green (520 nm) and blue (450 nm) light, is transformed into a diverging spherical wave through a spatial filter (SF). Lenses L1 and L2 adjust the size of the spherical wave, and lens L3 converts it into a plane wave. The hologram generated by the SBS method is loaded onto a commercial amplitude LCD with a pixel pitch of 72 µm and a resolution of 2160 × 3840. The commercial LCD can modulate the red, green and blue pixels independently, which makes it suitable for color holographic display. The reconstruction distance is 1 m. An eye-tracking camera is used in conjunction with an eye-tracking algorithm to track the observer's position in real time, which is discussed in Section 3.3.


Fig. 5. Optical experiment results. (a) Experimental setup. (b, c) Optical experimental reconstructed 3D images focused at 900 mm and 1000 mm, respectively. (d, e) Ground truth. (f)-(k) Optical reconstructed images of the BH, SH and SBS methods.


Figure 5(b) and Fig. 5(c) demonstrate the 3D display capability of the SBS method. The 3D scene is composed of two letter models located at different distances from the hologram plane. The CGH is calculated for the letters ‘F’ and ‘N’ placed at distances z1 = 900 mm and z2 = 1000 mm, respectively. The two letters are focused and blurred in the same way as real objects. When the camera focuses at distance z1, the letter ‘F’ is clear and in good shape. As the focal position moves to distance z2, the letter ‘N’ becomes clear. The experiment shows that the proposed SBS method can display 3D objects correctly.

To verify the reconstruction quality of the proposed method in the practical display, we compare the reconstructed images generated with the SBS, BH and SH methods, as shown in Fig. 5(d)–5(k). The resolution board image and the color prism image are used to calculate the CGHs, as shown in Fig. 5(d) and 5(e). Figure 5(f)–5(k) shows the comparison of the reconstructions. The interference among object points causes speckle noise in the optically reconstructed images. Because of the insufficient sampling frequency, the reconstructed images of the BH method suffer from the aliasing problem. The SH method reconstructs the original image correctly, but the image quality is affected by speckle noise. The error compensation of the proposed SBS method reduces the information loss of the reconstructed images, thus providing high reconstruction quality.

3.3 Analysis

To further illustrate the advantages of our proposed SBS method, it is compared with the SH and BH methods in terms of image quality and computation time in computational simulations for different numbers of object points. All methods were tested for numbers of object points ranging from 3.5 K to 230 K. As shown in Fig. 6(a)-(c), the SH method can reconstruct high-quality images, but its computation time is the longest and increases significantly with the number of object points. When the number of object points reaches 230 K, the computation times of the SH method and the SBS method are 51.89 s and 2.68 s, respectively; the computation time of the SBS method is thus 94.89% shorter than that of the SH method. The computation time of the BH method is shorter than that of the SH method, but its reconstruction quality is greatly affected by the aliasing phenomenon, resulting in low PSNR and SSIM. The SBS method alleviates the trade-off between computation time and reconstruction quality, providing both short computation time and high reconstruction quality. Moreover, the gap in computation speed between the proposed SBS method and the SH method widens as the number of object points increases.


Fig. 6. Comparison of reconstruction quality and computation time. (a, b) Reconstruction quality. PSNR and SSIM values of all methods are averaged over 20 test images. Our proposed SBS method as well as the SH method achieves high reconstruction quality. Reconstruction quality of the BH method is affected by aliasing. (c) Comparison of computation time on the CPU framework. When compared with the SH method, the speed advantage of the proposed SBS method is significant. (d) Computation time of the proposed SBS method on the CPU and GPU frameworks. The SBS method implemented on the GPU achieves real-time (> 24 fps) generation of holograms.


In order to explore the full potential of the proposed SBS method in terms of computation speed, it has been implemented on the GPU framework under the compute unified device architecture (CUDA) platform to confirm its accelerated hologram calculation capability and its potential for real-time CGH generation. The GPU programs run on an NVIDIA RTX 3090 GPU with 24 GB of memory. Figure 6(d) compares the computation time of the SBS method on the CPU and GPU frameworks, again for numbers of object points ranging from 3.5 K to 230 K. The computation time on the GPU framework is much shorter than that on the CPU framework, confirming the greater computational capacity of the GPU. For instance, the computation times on the CPU and GPU frameworks for 14 K object points are 0.79 s and 0.01 s, respectively. As shown in Fig. 6(d), the SBS method implemented on the GPU framework is capable of real-time CGH generation.

The viewing area of the commercial LCD system is limited by interference from higher diffraction orders. In order to expand the field of view of the holographic 3D display, the next step is to implement real-time eye tracking. The camera in the optical setup captures a picture of the observer and sends the digital image to the computer, and an eye-tracking program calculates the position of the observer's pupil in real time. In future work, the display system will be able to dynamically shift the viewing window through an additional hologram encoding method and optical system adjustment.

4. Conclusion

In conclusion, the superpixel-based sub-hologram method is proposed to alleviate the trade-off between computation time and reconstruction quality. The SBS method calculates the wavefront distribution of each SBS and compensates for the information loss; the final hologram is the superposition of all the SBSs. The SBS method reduces the computation time by dividing the 3D object into superpixels and limiting the diffraction region of the object points. To confirm the feasibility of the proposed method, we conduct numerical simulations and optical experiments, loading the generated CGH onto a commercial liquid crystal display (LCD) system. The computation time was found to be reduced by 94.89% compared with the SH method. When implemented on the GPU framework, the proposed method can achieve real-time (> 24 fps) color 3D holographic display with a display size of 155.52 mm × 276.48 mm. With the upcoming development of our eye-tracking technology, the proposed SBS method implemented on the commercial LCD system will achieve real-time color 3D holographic display with large size and an expanded field of view. This work may pave a new avenue for real-time 3D holographic display, which could lead to applications in near-eye holographic display devices and next-generation displays such as holographic televisions, telepresence systems and 3D desktop displays.

Funding

Beijing Municipal Science & Technology Commission, Administrative Commission of Zhongguancun Science Park (Z211100004821012); National Natural Science Foundation of China (61975014, 62035003).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948).

2. C. J. Kuo and M.-H. Tsai, Three-Dimensional Holographic Imaging (John Wiley & Sons, 2002).

3. T. C. Poon, Digital Holography and Three-dimensional Display (Springer Verlag, 2007).

4. H. Sasaki, K. Yamamoto, K. Wakunami, Y. Ichihashi, R. Oi, and T. Senoh, “Large size three-dimensional video by electronic holography using multiple spatial light modulators,” Sci. Rep. 4(1), 6177 (2015).

5. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16(16), 12372–12386 (2008).

6. A. E. Shortt, T. J. Naughton, and B. Javidi, “Histogram approaches for lossy compression of digital holograms of three-dimensional objects,” IEEE Trans. on Image Process. 16(6), 1548–1556 (2007).

7. T. Nishitsuji, T. Shimobaba, T. Kakue, N. Masuda, and T. Ito, “Review of fast calculation techniques for computer-generated holograms with the point-light-source-based model,” IEEE Trans. Ind. Inf. 13(5), 2447–2454 (2017).

8. E. Sahin, E. Stoykova, J. Mäkinen, and A. Gotchev, “Computer-generated holograms for 3D imaging: A survey,” ACM Comput. Surv. 53(2), 1–2 (2020).

9. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993).

10. S. C. Kim and E. S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008).

11. Y. Pan, X. Xu, S. Solanki, X. Liang, R. Tanjung, C. Tan, and T. Chong, “Fast CGH computation using S-LUT on GPU,” Opt. Express 17(21), 18543–18555 (2009).

12. J. Jia, Y. T. Wang, X. Li, Y. J. Pan, B. Zhang, Q. Zhao, and W. Jiang, “Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display,” Appl. Opt. 52(7), 1404–1412 (2013).

13. C. Gao, J. Liu, X. Li, G. L. Xue, J. Jia, and Y. T. Wang, “Accurate compressed look up table method for CGH in 3D holographic display,” Opt. Express 23(26), 33194–33204 (2015).

14. D. Pi, J. Liu, R. Kang, Z. Zhang, and Y. Han, “Reducing the memory usage of computer-generated hologram calculation using accurate high-compressed look-up-table method in color 3D holographic display,” Opt. Express 27(20), 28410–28422 (2019).

15. D. Pi, J. Liu, Y. Han, A. Khalid, and S. Yu, “Simple and effective calculation method for computer-generated hologram based on non-uniform sampling using look-up-table,” Opt. Express 27(26), 37337–37348 (2019).

16. Y. Zhao, K. Kwon, M. Erdenebat, M. Islam, S. Jeon, and N. Kim, “Quality enhancement and GPU acceleration for a full-color holographic system using a relocated point cloud gridding method,” Appl. Opt. 57(15), 4253–4262 (2018).

17. Y. Zhao, L. C. Cao, H. Zhang, D. Kong, and G. F. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015).

18. H. Zhang, L. C. Cao, and G. F. Jin, “Three-dimensional computer-generated hologram with Fourier domain segmentation,” Opt. Express 27(8), 11689–11697 (2019).

19. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009).

20. P. W. M. Tsang and T. C. Poon, “Fast generation of digital holograms based on warping of the wavefront recording plane,” Opt. Express 23(6), 7667–7673 (2015).

21. D. Pi, J. Liu, Y. Han, S. Yu, and N. Xiang, “Acceleration of computer-generated hologram using wavefront-recording plane and look-up table in three-dimensional holographic display,” Opt. Express 28(7), 9833–9841 (2020).

22. Y. L. Li, D. Wang, N. N. Li, and Q. H. Wang, “Fast hologram generation method based on the optimal segmentation of a sub-CGH,” Opt. Express 28(21), 32185–32198 (2020).

23. H. Kim, J. Hahn, and B. Lee, “Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography,” Appl. Opt. 47(19), D117–D127 (2008).

24. D. Im, J. Cho, J. Hahn, B. Lee, and H. Kim, “Accelerated synthesis algorithm of polygon computer-generated holograms,” Opt. Express 23(3), 2863–2871 (2015).

25. Y.-M. Ji, H. Yeom, and J. H. Park, “Efficient texture mapping by adaptive mesh division in mesh-based computer generated hologram,” Opt. Express 24(24), 28154–28169 (2016).

26. T. Sugie, T. Akamatsu, T. Nishitsuji, R. Hirayama, N. Masuda, H. Nakayama, Y. Ichihashi, A. Shiraki, M. Oikawa, N. Takada, and Y. Endo, “High-performance parallel computing for next-generation holographic imaging,” Nat. Electron. 1(4), 254–259 (2018).

27. M. W. Kwon, S. C. Kim, S. E. Yoon, Y. S. Ho, and E. S. Kim, “Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes,” Opt. Express 23(3), 2101–2120 (2015).

28. N. Leister, A. Schwerdtner, G. Fütterer, S. Buschbeck, and S. Flon, “Full-color interactive holographic projection system for large 3D scene reconstruction,” Proc. SPIE 6911, 69110V (2008).

29. P. He, J. Liu, T. Zhao, Y. Han, and Y. Wang, “Compact large size colour 3D dynamic holographic display using liquid crystal display panel,” Opt. Commun. 432, 54–58 (2019).

30. J. Liu and S. Lu, “Fast calculation of high-definition depth-added computer-generated holographic stereogram by spectrum-domain look-up table,” Appl. Opt. 60(4), A104–A110 (2021).

31. A. Khuderchuluun, E. Dashdavaa, Y. Lim, J. Jeong, and N. Kim, “Full-parallax holographic stereogram printer for computer-generated volume hologram,” Digital Holography and Three-Dimensional Imaging 2019, Th3A.17 (2019).

32. X. Cao, M. Guan, L. Xia, X. Sang, and Z. Chen, “High efficient generation of holographic stereograms based on wavefront recording plane,” Chin. Opt. Lett. 15(12), 120901 (2017).

33. X. Yan, C. Wang, Y. Liu, X. Wang, X. Liu, T. Jing, S. Chen, P. Li, and X. Jiang, “Implementation of the real–virtual 3D scene-fused full-parallax holographic stereogram,” Opt. Express 29(16), 25979–26003 (2021).


