Abstract
We propose a holographic projection system that achieves high image quality, brightness, and light efficiency. Using a novel, to the best of our knowledge, light-efficiency loss function, we are able to concentrate more light on the projection region and improve display brightness compared with conventional projectors. Leveraging emerging artificial intelligence-driven computer-generated holography and camera-in-the-loop calibration techniques, we learn a holographic wave propagation model using experimentally captured holographic images and demonstrate state-of-the-art light reallocation performance with high image quality.
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
Projection systems should simultaneously realize high image quality, brightness, and illumination utilization. High brightness is essential for creating a comfortable viewing experience under challenging scenarios such as projection under strong ambient light. High illumination utilization reduces power usage and thermal engineering efforts, allowing for more compact and energy-efficient projector architectures.
Such improvements, however, are difficult to realize using current approaches, which mostly fall under two categories: (i) improving the light engine in traditional displays [1,2] or (ii) a dual-display configuration [3,4]. Although the output power and power efficiency of light engines have improved significantly throughout the years, the light utilization of traditional displays, i.e., category (i), is fundamentally limited by amplitude modulation. This process selectively attenuates light using a transmissive or reflective amplitude spatial light modulator (SLM) to generate a desired intensity pattern. This inability to utilize the full output power of the illumination engine leads to excessive power consumption, thermal engineering challenges, and thus low light efficiency and brightness. On the other hand, dual-display configurations, i.e., category (ii), consist of two SLMs, where one phase SLM is used for reallocating light to different parts of the illuminated region and another amplitude SLM is used for displaying images. Although they are able to achieve high brightness, dual-display architectures are often undesirable due to increased power consumption and cost, bulkier size, and synchronization issues.
Computer-generated holography (CGH), on the other hand, has great potential to revolutionize high-brightness projection systems through its “light reallocation” mechanism, whereby light can in theory be redirected anywhere within the sub-hologram of a point source (refer to Supplement 1 for more details on the effect of the sub-hologram cone size on light efficiency). This property was experimentally demonstrated in Ref. [5], where digital holography is used to perform 3D sensing. However, that system, as well as prior work on holographic projection systems [6–9], cannot display high-image-quality holograms free of speckle noise and ripple artifacts. Furthermore, these works did not aim to improve the light efficiency of holographic displays by improving the CGH optimization process. Recently, emerging artificial intelligence (AI)-driven CGH algorithms [10–13] and camera-in-the-loop (CITL) calibration techniques have achieved state-of-the-art holographic display image quality based on Fresnel holography, yet it is unclear whether these frameworks translate well to projectors, and they do not focus on increasing the brightness of the projected images.
Motivated by these challenges, we propose a Fresnel holography-based projection system that fully leverages the desirable “light reallocation” property of holography to achieve high-brightness projection. We design a light-efficiency-promoting loss function and employ a long propagation distance between the phase SLM and the image plane, which improves image brightness and light usage by around 30% while achieving high image quality. Furthermore, we leverage emerging AI-driven CITL calibration and model training techniques in our projector setup and demonstrate state-of-the-art holographic image quality. Although prior work on holographic projection exists [6–9], its image quality is often unacceptable for a comfortable viewing experience due to the mismatch between physical and simulated setups, and its CGH algorithms [14,15] are not optimized to utilize the full output power of the laser illumination. Our work is the first to leverage AI-driven CGH techniques to drastically improve the image quality and brightness of projected holographic images.
Figure 1 shows our holographic projection system, in which the images formed on the hologram image plane (yellow box) via Fresnel holography are projected onto a screen by a projection lens. In Fresnel holography, a collimated coherent laser beam illuminates an SLM with a source field $u_\text {src}$. The SLM imparts a phase delay $\phi$ on $u_\text {src}$, resulting in a modulated field $u_\text {SLM}$, which is then propagated by a distance $z$ to form a field $u_z$ and finally imaged onto the projection screen by a magnifying projection lens. The propagated field $u_z$ generates an intensity pattern $|u_z|^2$ on the projection screen due to interference at the hologram plane. The image formation described above is governed by the following equations:
$$u_\text{SLM} = u_\text{src}\, e^{i\phi}, \qquad u_z = f(u_\text{SLM}, z), \qquad I = |u_z|^2,$$
where $f$ denotes free-space wave propagation over distance $z$, e.g., via the angular spectrum method [16].
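As a concrete numerical illustration of this image formation, the sketch below implements free-space propagation via the angular spectrum method [16]. It is a generic stand-in, not the calibrated propagation model learned later in this Letter, and the flat phase pattern is used only to keep the example verifiable.

```python
import numpy as np

def propagate_asm(u_slm, z, wavelength, pitch):
    """Angular-spectrum propagation of a complex field over distance z."""
    ny, nx = u_slm.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function H = exp(i 2*pi*z * sqrt(1/lambda^2 - fx^2 - fy^2))
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * z * kz) * (arg > 0)      # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(u_slm) * H)

phi = np.zeros((800, 1280))                  # flat phase, for illustration only
u_src = np.ones_like(phi, dtype=complex)     # collimated unit-amplitude beam
u_slm = u_src * np.exp(1j * phi)             # u_SLM = u_src * exp(i*phi)
u_z = propagate_asm(u_slm, z=0.1, wavelength=636.4e-9, pitch=10.8e-6)
intensity = np.abs(u_z) ** 2                 # |u_z|^2 on the image plane
```

For a flat phase the propagated intensity stays uniform, which provides a quick sanity check of the implementation.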
Similar to Choi et al. [11], we capture training and test sets comprising a large number of SLM phase patterns and corresponding amplitude images recorded at a fixed distance. Using a standard stochastic gradient descent (SGD) solver, we fit the model parameters on the training image pairs to learn the calibrated holographic projection model.
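A minimal sketch of this calibration step in a PyTorch-style setup follows. The learnable parameters (a per-pixel source-amplitude map and a global scale) and the single-FFT propagation stand-in are deliberate simplifications of the learned wave propagation model, and the "captured" amplitudes are synthesized here rather than measured with a camera.

```python
import torch

ny, nx = 64, 64
# Hypothetical learnable parameters (the real calibrated model is richer).
src_amp = torch.ones(ny, nx, requires_grad=True)   # source-amplitude correction
scale = torch.tensor([0.5], requires_grad=True)    # global intensity scale

def f_model(phase, z):
    # Stand-in propagation: one orthonormal FFT; z is unused in this toy model.
    return torch.fft.fft2(src_amp * torch.exp(1j * phase), norm="ortho")

# Synthetic training pairs (in practice: displayed phases and camera captures).
phases = [torch.rand(ny, nx) * 2 * torch.pi for _ in range(8)]
captured = [torch.abs(torch.fft.fft2(torch.exp(1j * p), norm="ortho")) for p in phases]

opt = torch.optim.SGD([src_amp, scale], lr=0.1)
for epoch in range(50):
    for p, a_cap in zip(phases, captured):
        opt.zero_grad()
        a_pred = scale * torch.abs(f_model(p, z=0.1))
        loss = torch.mean((a_pred - a_cap) ** 2)   # amplitude-matching loss
        loss.backward()
        opt.step()
```

Because the synthetic targets are realizable by the model, SGD drives the loss close to zero; with real captures, the residual reflects unmodeled optics.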
After the holographic projection model is learned, we can optimize for the phase pattern $\phi$ displayed on the SLM that generates a desired target image amplitude $a_{\text {target}}$ by minimizing $\mathcal {L}_{\text {recon}}(s\cdot |f_\text {model}(e^{i\phi }, z) |, a_\text {target})$, where $s$ is a scale factor that is optimized along with $\phi$ and $\mathcal {L}_{\text {recon}}$ is an arbitrary loss function between the reconstructed image and the target image. We use an SGD procedure to optimize for $\phi$, and the update rules of our SGD-based solver in iteration $k$ with learning rate $\alpha$ are
$$\phi^{(k+1)} = \phi^{(k)} - \alpha \nabla_{\phi} \mathcal{L}, \qquad s^{(k+1)} = s^{(k)} - \alpha \nabla_{s} \mathcal{L}.$$
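In code, this phase optimization is a short autograd loop. The sketch below again uses a single orthonormal FFT as a stand-in for the learned $f_\text{model}$, a plain MSE as the reconstruction loss, and illustrative hyperparameters; none of these specific choices are prescribed by the Letter.

```python
import torch

def f_model(u, z):
    # Stand-in for the learned propagation model (z unused in this toy version).
    return torch.fft.fft2(u, norm="ortho")

ny, nx = 64, 64
a_target = torch.zeros(ny, nx)
a_target[24:40, 24:40] = 1.0                       # simple bright-square target

phi = (2 * torch.pi * torch.rand(ny, nx)).requires_grad_()
s = torch.ones(1, requires_grad=True)              # scale factor, optimized jointly
opt = torch.optim.SGD([phi, s], lr=0.05)           # learning rate alpha

losses = []
for k in range(300):
    opt.zero_grad()
    a_recon = s * torch.abs(f_model(torch.exp(1j * phi), z=0.1))
    loss = torch.mean((a_recon - a_target) ** 2)   # L_recon as plain MSE
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

The random phase initialization avoids the degenerate flat-phase field, whose near-zero Fourier coefficients make the complex magnitude non-differentiable.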
The total loss of the phase optimization procedure is therefore
$$\mathcal{L} = \mathcal{L}_{\text{recon}} + \gamma\, \mathcal{L}_{\text{eff}},$$
where $\gamma$ determines the weighting between the image reconstruction and light-efficiency loss functions. Details of the optimization procedure and the choice of regularization weight are described in Supplement 1.

Our holographic projector uses a TI DLP6750Q1EVM SLM with a resolution of $1280 \times 800$, a pixel pitch of 10.8 $\mu$m, and a bit depth of 4 bits per pixel. The laser is an FISBA RGBeam fiber-coupled module with three optically aligned laser diodes with a maximum output power of 50 mW and RGB wavelengths of 636.4, 517.7, and 440.8 nm, respectively. A Nikon AF Nikkor 50 mm f/1.4D lens is used to magnify the holographic image and project it onto a screen 87 cm away from the lens. With this configuration, our projection system provides a large viewing region of 12 cm $\times$ 21 cm. The SLM phases are optimized separately for each color channel, and the captured monochrome images are combined in post-processing to create a full-color image. All holograms are eight-frame time-multiplexed, i.e., eight phase patterns are displayed sequentially to generate a single amplitude image. All images are captured with an FLIR Grasshopper 2.3 MP color USB3 vision sensor. Additional hardware and software implementation details are discussed in Supplement 1.
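Returning to the loss design: one plausible form of the weighted total loss is sketched below. The exact light-efficiency loss of this Letter is detailed in its Supplement 1, so the mask-based out-of-region penalty here is an assumption chosen for illustration.

```python
import numpy as np

def light_efficiency_loss(a_recon, region_mask):
    # Penalize amplitude landing outside the projection region, encouraging
    # the optimizer to reallocate light into the region rather than discard it.
    outside = a_recon * (1.0 - region_mask)
    return np.mean(outside ** 2)

def total_loss(a_recon, a_target, region_mask, gamma=0.5):
    # L = L_recon + gamma * L_eff, with MSE standing in for L_recon.
    recon = np.mean((a_recon - a_target) ** 2)
    return recon + gamma * light_efficiency_loss(a_recon, region_mask)
```

Penalizing only out-of-region light leaves the reconstruction term free to shape the image inside the projection region, which is why image quality need not degrade as efficiency improves.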
Perhaps the most desirable feature of holographic projectors is their ability to redirect illumination power to arbitrary regions within a sub-hologram cone. This property enables high-brightness projection through light reallocation: light is steered away from dim image regions to bright ones without being blocked. Thus, holography can potentially outperform traditional light-attenuating transmissive displays and self-emissive displays in output illumination power utilization and brightness. This is especially true when displaying extremely sparse content in which most pixel values are zero, since most light would then be blocked in light-attenuating displays, and self-emissive pixels cannot output their maximum power. Here, we define sparsity as the ratio of zero-valued pixels to the total number of pixels in an image. This can be seen in Fig. 3: image brightness drastically increases with sparsity for holographic projectors but stays constant for traditional displays. Our proposed loss function further improves brightness at every sparsity level, highlighting our method’s robustness to sparsity. These results suggest that holographic projection could be extremely useful in applications such as automotive head-up displays (HUDs), where most of the displayed content is sparse text and pictograms.
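The sparsity definition and the brightness argument above can be made concrete with a small idealized example; the unit power budget and lossless-reallocation assumption below are illustrative simplifications, not measured properties of any display.

```python
import numpy as np

def sparsity(img):
    """Ratio of zero-valued pixels to total pixels (the definition used here)."""
    return np.count_nonzero(img == 0) / img.size

img = np.zeros((100, 100))
img[:10, :10] = 1.0                        # only 1% of pixels are bright

# An amplitude (light-attenuating) display caps every pixel at the illumination
# level, while an idealized holographic display can concentrate the entire
# power budget into the nonzero pixels.
attenuating_peak = 1.0
holographic_peak = img.size / np.count_nonzero(img)
```

For this 99%-sparse test image, ideal reallocation yields a 100-fold peak-brightness advantage, which is the mechanism behind the sparsity trend in Fig. 3.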
To fully unlock the light reallocation capability of holography, the sub-hologram cone of each SLM pixel should ideally fully cover the projection region such that light can be redirected anywhere within. In other words, the holographic image should be formed very far away from the SLM, which means that the propagation distance $z$ should be long ($z \geq 0.6 \; d^\ast$). More discussion regarding the normalized distance unit $d^\ast$ can be found in Supplement 1. This is illustrated in Fig. 4(c), where we show that a long propagation distance is crucial for a holographic display to perform light reallocation. However, we observe from Fig. 4(a) that a long propagation distance results in severe fringe artifacts in the captured SGD-optimized images. This is the reason why most prior works employ a near-SLM setting that performs little to no light reallocation [13,17]. In contrast, the learned wave propagation model in our work fully enables a long propagation distance, as shown in Fig. 4(b) where all artifacts are eliminated. Coupled with our new loss function, we are able to optimize light efficiency and image brightness while maintaining great image quality.
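A back-of-the-envelope estimate shows why the propagation distance must be long, using the grating equation and the red-channel parameters of our setup. The full-coverage criterion below (one pixel's cone spanning the SLM width) is an illustrative proxy; the normalized distance $d^\ast$ is formally defined in Supplement 1.

```python
import numpy as np

wavelength = 636.4e-9        # red channel, m
pitch = 10.8e-6              # SLM pixel pitch, m
nx = 1280                    # horizontal SLM resolution

# Maximum diffraction half-angle: sin(theta) = lambda / (2 * pitch).
theta = np.arcsin(wavelength / (2 * pitch))

# Distance at which one pixel's sub-hologram cone spans the full SLM width,
# i.e., light from that pixel can reach anywhere on the image plane.
slm_width = nx * pitch                       # about 13.8 mm
z_full_cover = slm_width / (2 * np.tan(theta))
```

The half-angle is under two degrees, so the required distance comes out to roughly a quarter meter, orders of magnitude beyond the near-SLM settings of prior work.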
From Figs. 2 and 5, we observe that the light-efficiency loss function successfully constrains light within the projection region and eliminates stray light outside of it, improving image brightness by around 30% for all test cases. Furthermore, pixel-wise and perceptual image reconstruction performance is maintained when images are optimized with this loss function, showing that a significant light-efficiency gain can be achieved without a large drop in image quality.
To summarize, we demonstrate that our holographic projection system is capable of displaying images with unprecedented image quality, brightness, and light efficiency. Nevertheless, our method is not without limitations. It takes about 20 minutes to optimize a phase pattern on a single NVIDIA RTX A6000 GPU. Although it is possible to train an inverse model that directly maps input images to optimized phases [12,13], we have not attempted this in this Letter and leave it for future work. Currently, the zeroth-order undiffracted light in our system adversely affects the dynamic range of the projected content and reduces image contrast. We believe that this challenge can eventually be overcome by using higher-efficiency SLMs or by moving to an off-axis setup in which the image and the zeroth-order DC term are separated by applying a phase ramp to the SLM [18,19]. Finally, we have not extended this method to video content, but in theory this can be easily implemented by dynamically adjusting the laser power to account for the different scale factor $s$ values across frames.
The powerful light reallocation mechanism of holography together with AI-driven, fully automatic display calibration techniques has great potential to revolutionize high-brightness projection systems. In this work, we leverage state-of-the-art camera-calibrated wave propagation model algorithms and a novel loss function design to achieve superior image quality, brightness, and light efficiency in a holographic projection system.
Acknowledgments
B.C. is supported by the Stanford Graduate Fellowship (SGF) and the NSF Graduate Research Fellowship. M.G. is supported by the SGF. S.C. is supported by the Meta Research Ph.D. Fellowship and the Kwanjeong Scholarship. We thank Sony and Samsung for helpful discussions and support.
Disclosures
The authors declare no conflicts of interest.
Data availability
All data needed to evaluate the conclusions in the Letter are present in the Letter and/or Supplement 1 and Visualization 1.
Supplemental document
See Supplement 1 for supporting content.
REFERENCES
1. P. Antonis, D. de Boer, R. Koole, S. Kadijk, Y. Li, V. Vanbroekhoven, P. Van De Voorde, and C. G. A. Hoelen, in Sixteenth International Conference on Solid State Lighting and LED-Based Illumination Systems, N. Dietz and I. T. Ferguson, eds. (SPIE, 2017), p. 23.
2. J. Xiong, E.-L. Hsiang, Z. He, T. Zhan, and S.-T. Wu, Light: Sci. Appl. 10, 216 (2021). [CrossRef]
3. G. Damberg, J. Gregson, and W. Heidrich, ACM Trans. Graph. 35, 24 (2016). [CrossRef]
4. R. Hoskinson, B. Stoeber, W. Heidrich, and S. Fels, ACM Trans. Graph. 29, 165 (2010). [CrossRef]
5. D. Chan, S. G. Narasimhan, and M. O’Toole, in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2022), pp. 17865–17874.
6. M. Makowski, I. Ducin, M. Sypek, A. Siemion, A. Siemion, J. Suszek, and A. Kolodziejczyk, Opt. Lett. 35, 1227 (2010). [CrossRef]
7. M. Makowski, I. Ducin, K. Kakarenko, J. Suszek, M. Sypek, and A. Kolodziejczyk, Opt. Express 20, 25130 (2012). [CrossRef]
8. E. Buckley, J. Disp. Technol. 7, 135 (2011). [CrossRef]
9. T. Shimobaba, T. Kakue, and T. Ito, in 2015 IEEE 13th International Conference on Industrial Informatics (INDIN) (IEEE, 2015), pp. 732–741.
10. S. Choi, M. Gopakumar, Y. Peng, J. Kim, and G. Wetzstein, ACM Trans. Graph. 40, 240 (2021). [CrossRef]
11. S. Choi, M. Gopakumar, Y. Peng, J. Kim, M. O’Toole, and G. Wetzstein, in Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings (ACM, 2022), pp. 1–9.
12. Y. Peng, S. Choi, N. Padmanaban, and G. Wetzstein, ACM Trans. Graph. 39, 185 (2020). [CrossRef]
13. L. Shi, B. Li, C. Kim, P. Kellnhofer, and W. Matusik, Nature 591, 234 (2021). [CrossRef]
14. R. W. Gerchberg and W. O. Saxton, Optik 35, 237 (1972).
15. E. Buckley, A. Cable, N. Lawrence, and T. Wilkinson, Appl. Opt. 45, 7334 (2006). [CrossRef]
16. J. W. Goodman, Introduction to Fourier Optics, 4th ed. (W.H. Freeman, Macmillan Learning, 2017).
17. A. Maimone, A. Georgiou, and J. S. Kollin, ACM Trans. Graph. 36, 85 (2017). [CrossRef]
18. C. Chen, D. Kim, D. Yoo, B. Lee, and B. Lee, Opt. Lett. 47, 790 (2022). [CrossRef]
19. H. Zhang, J. Xie, J. Liu, and Y. Wang, Appl. Opt. 48, 5834 (2009). [CrossRef]