Abstract
Holographic display is considered a promising three-dimensional (3D) display technology and has been widely studied. To date, however, real-time holographic display of real scenes remains far from everyday use: the speed and quality of both information extraction and hologram computation need further improvement. In this paper, we propose an end-to-end real-time holographic display based on real-time capture of real scenes, in which parallax images are collected from the scene and a convolutional neural network (CNN) builds the mapping from the parallax images to the hologram. The parallax images are acquired in real time by a binocular camera and contain the depth and amplitude information needed for 3D hologram calculation. The CNN, which transforms parallax images into 3D holograms, is trained on datasets consisting of parallax images and high-quality 3D holograms. Static colorful reconstruction and speckle-free real-time holographic display based on real-time capture of real scenes are verified by optical experiments. With its simple system composition and affordable hardware requirements, the proposed technique overcomes the limitations of existing real-scene holographic displays and opens a new direction for applications of real-scene holographic 3D display, such as holographic live video and mitigating the vergence-accommodation conflict (VAC) in head-mounted display devices.
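The core idea of the pipeline above is a CNN that takes a stereo parallax pair as input and directly outputs a hologram. As a rough illustration only (the paper does not specify the network here; the architecture, layer widths, and the phase-only output convention below are all assumptions), such a mapping can be sketched in PyTorch:

```python
# Hypothetical sketch of a parallax-pair-to-hologram CNN; NOT the authors'
# network. Layer count, channel widths, and phase-only output are assumed.
import torch
import torch.nn as nn


class ParallaxToHologram(nn.Module):
    """Maps two parallax images (stacked as 2 channels) to a phase hologram."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, stereo_pair: torch.Tensor) -> torch.Tensor:
        # Squash output into [-pi, pi], interpreted as the hologram phase.
        return torch.pi * torch.tanh(self.net(stereo_pair))


# Toy usage: a 64x64 left/right pair from the binocular camera.
left = torch.rand(1, 1, 64, 64)
right = torch.rand(1, 1, 64, 64)
model = ParallaxToHologram()
phase = model(torch.cat([left, right], dim=1))  # shape (1, 1, 64, 64)
```

In an end-to-end scheme like the one described, such a network would be trained by comparing its output against precomputed high-quality 3D holograms, so that inference replaces the slow hologram computation at display time.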
© 2023 Optica Publishing Group