Optica Publishing Group

High-brightness holographic projection

Open Access

Abstract

We propose a holographic projection system that achieves high image quality, brightness, and light efficiency. Using a novel, to the best of our knowledge, light-efficiency loss function, we are able to concentrate more light on the projection region and improve display brightness compared with conventional projectors. Leveraging emerging artificial intelligence-driven computer-generated holography and camera-in-the-loop calibration techniques, we learn a holographic wave propagation model using experimentally captured holographic images and demonstrate state-of-the-art light reallocation performance with high image quality.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

Projection systems should simultaneously realize high image quality, brightness, and illumination utilization. High brightness is essential for creating a comfortable viewing experience under challenging scenarios such as projection under strong ambient light. High illumination utilization reduces power usage and thermal engineering efforts, allowing for more compact and energy-efficient projector architectures.

Such improvements, however, are difficult to realize using current approaches, which mostly fall under two categories: (i) improving the light engine in traditional displays [1,2] or (ii) a dual-display configuration [3,4]. Although the output power and power efficiency of light engines have improved significantly throughout the years, the light utilization of traditional displays, i.e., category (i), is fundamentally limited by amplitude modulation. This process selectively attenuates light using a transmissive or reflective amplitude spatial light modulator (SLM) to generate a desired intensity pattern. This inability to utilize the full output power of the illumination engine leads to excessive power consumption, thermal engineering challenges, and thus low light efficiency and brightness. On the other hand, dual-display configurations, i.e., category (ii), consist of two SLMs, where one phase SLM is used for reallocating light to different parts of the illuminated region and another amplitude SLM is used for displaying images. Although they are able to achieve high brightness, dual-display architectures are often undesirable due to increased power consumption and cost, bulkier size, and synchronization issues.

Computer-generated holography (CGH), on the other hand, has great potential to revolutionize high-brightness projection systems through its “light reallocation” mechanism, whereby light can in theory be redirected anywhere within the sub-hologram of a point source (refer to Supplement 1 for more details on the effect of the sub-hologram cone size on light efficiency). This property was experimentally demonstrated in Ref. [5], where digital holography is used to perform 3D sensing. However, that system, as well as prior work on holographic projection systems [6–9], cannot display high-image-quality holograms free of speckle noise and ripple artifacts. Furthermore, these works did not aim to improve the light efficiency of holographic displays by improving the CGH optimization process. Recently, emerging artificial intelligence (AI)-driven CGH algorithms [10–13] and camera-in-the-loop (CITL) calibration techniques have achieved state-of-the-art holographic display image quality based on Fresnel holography, yet it is unclear whether these frameworks translate well to projectors, and they do not focus on increasing the brightness of the projected images.

Motivated by the aforementioned challenges, we propose a Fresnel holography-based projection system that fully leverages the desirable “light reallocation” property of holography to achieve high-brightness projection. We design a light-efficiency-promoting loss function and employ a long propagation distance between the phase SLM and the image plane, which improves image brightness and light usage by around 30% while achieving high image quality. Furthermore, we leverage emerging AI-driven CITL calibration and model training techniques in our projector setup and demonstrate state-of-the-art holographic image quality. Although there is prior work on holographic projection [6–9], its image quality is often unacceptable for a comfortable viewing experience due to the mismatch between the physical and simulated setups, and its CGH algorithms [14,15] are not optimized to utilize the full output power of the laser illumination. Our work is the first to leverage AI-driven CGH techniques to drastically improve image quality and increase the brightness of projected holographic images.

Figure 1 shows our holographic projection system, where the images formed on the hologram image plane (yellow box) via Fresnel holography are projected onto a screen via a projection lens. In Fresnel holography, a collimated coherent laser beam illuminates an SLM with a source field $u_\text {src}$. The SLM imparts a phase delay $\phi$ on $u_\text {src}$, resulting in a modulated field $u_\text {SLM}$, which is then propagated by a distance $z$ to form a field $u_z$, and finally imaged onto the projection screen by a magnifying projection lens. Due to interference, the propagated field $u_z$ generates an intensity pattern $|u_z|^2$ at the hologram plane, which appears on the projection screen. The image formation described above is governed by the following equations:

$$\begin{aligned} u_z(x, y, \lambda) & = f(u_{\text{SLM}}(x, y, \lambda), z),\\ u_{\text{SLM}}(x, y, \lambda) & = e^{i \phi(x, y, \lambda)} u_{\text{src}}(x, y, \lambda), \end{aligned}$$
where $\lambda$ is the wavelength of the laser, $x, y$ are the spatial coordinates, and $f$ is the composition of wave propagation from the SLM plane to the hologram plane and the projection onto the screen. The wave propagation component in $f$ can be either the angular spectrum method (ASM) [16] or a camera-calibrated learned wave propagation model, and we denote the corresponding $f$ by $f_\text {ASM}$ and $f_\text {model}$, respectively. The latter is described in detail in recent works [10–12] on AI-driven holographic near-eye displays. We employ this learned wave propagation framework in our high-brightness projection system, using the same model design as that proposed by Choi et al. [11]. Details of the model architecture are described further in Supplement 1.
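For readers who wish to experiment, the ASM component of $f$ can be sketched in a few lines of Python. This is a minimal illustration only: it omits the zero-padding and band-limiting that a practical CGH pipeline requires, and the function and parameter names are ours, not the paper's.

```python
import numpy as np

def asm_propagate(u_slm, z, wavelength, pitch):
    """Angular spectrum method: FFT the field, apply the free-space transfer
    function for distance z, inverse FFT. Evanescent components are zeroed."""
    ny, nx = u_slm.shape
    fx = np.fft.fftfreq(nx, d=pitch)  # spatial frequencies in 1/m
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    H = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0)
    return np.fft.ifft2(np.fft.fft2(u_slm) * H)

# Sanity check: a uniform plane wave stays uniform (|u_z| = 1 everywhere),
# since the transfer function has unit magnitude for propagating frequencies.
u_src = np.ones((256, 256), dtype=complex)
u_z = asm_propagate(u_src, z=0.1, wavelength=636.4e-9, pitch=10.8e-6)
```

A full implementation would also respect the sampling constraints of the ASM discussed in Goodman [16].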


Fig. 1. Holographic projector setup. The holographic image formed on the hologram plane (yellow square) is imaged onto the projection screen.



Fig. 2. Proposed holographic projection system that achieves high brightness, light efficiency, and image quality. By optimizing holograms using a camera-calibrated wave propagation model coupled with a new light-efficiency loss function, we are able to constrain illumination power to the projection region (center region inside the thick dark border) and thus increase overall brightness, all the while maintaining a high image quality. (a) Image optimized without the loss function. (b) Image optimized with the loss function. The close-ups demonstrate how light wasted outside the image region (left) is reallocated to contribute to the content (right).


Similar to Choi et al. [11], we capture training and test sets comprising a large number of SLM phase patterns and corresponding amplitude images recorded at a fixed distance. Using a standard stochastic gradient descent (SGD) solver, we fit the model parameters on the training image pairs to learn the calibrated holographic projection model.
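The CITL model-fitting step can be illustrated with a toy PyTorch example. Here the physical capture is simulated by ASM propagation through an unknown phase aberration, and the "learned model" is a drastically simplified stand-in for the architecture of Choi et al. [11]: only a learnable source amplitude and an SLM-plane phase correction. All shapes, distances, and hyperparameters below are illustrative, not values from the paper.

```python
import math
import torch

def asm(u, z, wl, pitch):
    """Minimal differentiable angular-spectrum propagation (no padding)."""
    ny, nx = u.shape
    FY, FX = torch.meshgrid(torch.fft.fftfreq(ny, d=pitch),
                            torch.fft.fftfreq(nx, d=pitch), indexing="ij")
    arg = 1.0 / wl ** 2 - FX ** 2 - FY ** 2
    kz = 2 * math.pi * z * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.where(arg > 0, torch.exp(1j * kz), torch.zeros((), dtype=torch.cfloat))
    return torch.fft.ifft2(torch.fft.fft2(u) * H)

class LearnedProp(torch.nn.Module):
    """Toy learned propagation model: ASM plus a learnable per-pixel source
    amplitude and SLM-plane phase correction."""
    def __init__(self, shape):
        super().__init__()
        self.src_amp = torch.nn.Parameter(torch.ones(shape))
        self.phi_err = torch.nn.Parameter(torch.zeros(shape))

    def forward(self, phi):
        u = self.src_amp * torch.exp(1j * (phi + self.phi_err))
        return asm(u, 0.02, 520e-9, 10.8e-6).abs()

torch.manual_seed(0)
shape = (64, 64)
aberration = 0.5 * torch.randn(shape)  # unknown ground-truth phase error

def capture(phi):
    """Simulated display + camera: the 'real' system has the aberration."""
    with torch.no_grad():
        return asm(torch.exp(1j * (phi + aberration)), 0.02, 520e-9, 10.8e-6).abs()

# "Captured" training pairs of random SLM phase patterns and amplitude images.
pairs = [(p, capture(p)) for p in (2 * math.pi * torch.rand(shape) for _ in range(8))]

model = LearnedProp(shape)
opt = torch.optim.Adam(model.parameters(), lr=0.02)
loss0 = sum(torch.nn.functional.mse_loss(model(p), c) for p, c in pairs).item()
for _ in range(300):
    opt.zero_grad()
    loss = sum(torch.nn.functional.mse_loss(model(p), c) for p, c in pairs)
    loss.backward()
    opt.step()
```

After fitting, the model's training loss drops well below its initial value, mirroring (in miniature) how the calibrated model absorbs systematic deviations between simulation and hardware.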

After the holographic projection model is learned, we can optimize for the phase pattern $\phi$ displayed on the SLM that generates a desired target image amplitude $a_{\text {target}}$ by minimizing $\mathcal {L}_{\text {recon}}(s\cdot |f_\text {model}(e^{i\phi }, z) |, a_\text {target})$, where $s$ is a scale factor that is optimized along with $\phi$ and $\mathcal {L}_{\text {recon}}$ is an arbitrary loss function between the reconstructed image and the target image. We use an SGD procedure to optimize for $\phi$, and the update rule of our SGD-based solver in iteration $k$ with learning rate $\alpha$ is

$$\phi^{(k)} = \phi^{(k-1)} - \alpha \left( \frac{\partial \mathcal{L}}{\partial \phi} \right)^T \mathcal{L} \left( s\cdot |f_\text{model}(u_\text{SLM}, z)|, a_\text{target}\right).$$
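In practice this update is computed by automatic differentiation through the propagation model. The sketch below optimizes a phase pattern for a toy target, with plain ASM standing in for $f_\text{model}$ and the Adam optimizer as a common practical substitute for vanilla SGD; all names and settings are illustrative.

```python
import math
import torch

def asm(u, z, wl, pitch):
    """Minimal differentiable angular-spectrum propagation (no padding)."""
    ny, nx = u.shape
    FY, FX = torch.meshgrid(torch.fft.fftfreq(ny, d=pitch),
                            torch.fft.fftfreq(nx, d=pitch), indexing="ij")
    arg = 1.0 / wl ** 2 - FX ** 2 - FY ** 2
    kz = 2 * math.pi * z * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.where(arg > 0, torch.exp(1j * kz), torch.zeros((), dtype=torch.cfloat))
    return torch.fft.ifft2(torch.fft.fft2(u) * H)

torch.manual_seed(0)
target = torch.zeros(128, 128)
target[48:80, 48:80] = 1.0  # toy target amplitude: a bright square

phi = (2 * math.pi * torch.rand(128, 128)).requires_grad_()  # SLM phase pattern
s = torch.tensor(1.0, requires_grad=True)                    # learnable scale factor
opt = torch.optim.Adam([phi, s], lr=0.05)

loss0 = None
for k in range(300):
    opt.zero_grad()
    recon = s * asm(torch.exp(1j * phi), 0.05, 520e-9, 10.8e-6).abs()
    loss = torch.nn.functional.mse_loss(recon, target)  # L_recon
    if loss0 is None:
        loss0 = loss.item()
    loss.backward()
    opt.step()
```

The random phase initialization is the usual choice here; it avoids the stagnation that a constant initialization can cause in phase retrieval.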
As demonstrated in Fig. 2, good image quality can be achieved using a simple image reconstruction loss for $\mathcal {L}_{\text {recon}}$, such as an $L_2$ loss, yet much of the light is distributed outside of the region of interest, limiting the light efficiency of the projector and the brightness of the projected images. Here, we propose a light-efficiency loss function $\mathcal {L}_{\text {reg}}$ that constrains light to the region of interest to improve the brightness of the projected images. $\mathcal {L}_{\text {reg}}$ enforces the complex field amplitude to match $a_\text {target}$ exactly, up to a scale factor:
$$\mathcal{L}_{\text{reg}} = L_2 \left( |f(u_\text{SLM}, z)|, \sqrt{\frac{\sum |u_{\text{src}}|^2}{\sum a_\text{target}^2}}a_\text{target} \right) .$$
This enforces that all illumination power $\sum |u_{\text {src}}|^2$ be distributed within the region of interest while maintaining accurate pixel-wise image reconstruction. In practice, $|u_{\text {src}}|^2$ needs to be precisely calibrated and we detail this procedure in Supplement 1.

The total loss of the phase optimization procedure is therefore

$$\mathcal{L} = \mathcal{L}_{\text{recon}} + \gamma \mathcal{L}_{\text{reg}} ,$$
where $\gamma$ determines the weighting between the image reconstruction and light-efficiency loss functions. Details of the optimization procedure and the choice of the regularization weight are described further in Supplement 1.
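Equations (3) and (4) translate directly into code. The sketch below implements both loss terms with an $L_2$ (MSE) distance in PyTorch; the function names, variable names, and default $\gamma$ are placeholders of ours, not values from the paper.

```python
import torch
import torch.nn.functional as F

def light_efficiency_loss(recon_amp, target_amp, src_power):
    """Eq. (3): L2 distance to the target rescaled so that it carries the
    full illumination power sum |u_src|^2 (src_power)."""
    scale = torch.sqrt(src_power / (target_amp ** 2).sum())
    return F.mse_loss(recon_amp, scale * target_amp)

def total_loss(recon_amp, s, target_amp, src_power, gamma=1.0):
    """Eq. (4): L = L_recon + gamma * L_reg. Note that L_recon compares the
    scaled reconstruction s * |f(.)|, while L_reg uses the raw amplitude."""
    l_recon = F.mse_loss(s * recon_amp, target_amp)
    return l_recon + gamma * light_efficiency_loss(recon_amp, target_amp, src_power)

# If the raw amplitude already distributes all source power in the target's
# shape, L_reg vanishes; the optimal s then rescales it back to the target.
target = torch.rand(32, 32) + 0.1
src_power = torch.tensor(100.0)
perfect = torch.sqrt(src_power / (target ** 2).sum()) * target
```

Using the raw (unscaled) amplitude in $\mathcal{L}_{\text{reg}}$ is what ties the reconstruction to the absolute illumination power rather than to a relative image shape.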

Our holographic projector uses a TI DLP6750Q1EVM SLM with a resolution of $1280 \times 800$, a pixel pitch of 10.8 $\mu$m, and a bit depth of 4 bits per pixel. The laser is an FISBA RGBeam fiber-coupled module with three optically aligned laser diodes with a maximum output power of 50 mW and RGB wavelengths of 636.4, 517.7, and 440.8 nm, respectively. A Nikon AF Nikkor 50 mm f/1.4D lens is used to magnify the holographic image and project it on a screen 87 cm away from the lens. With this configuration, our projection system provides a large viewing region of 12 cm $\times$ 21 cm for the red channel. The SLM phases are optimized separately for each color channel, and the captured monochrome images are combined in post-processing to create a full-color image. All holograms are eight-frame time-multiplexed, i.e., eight phase patterns are displayed sequentially to generate a single amplitude image. All images are captured with an FLIR Grasshopper 2.3 MP color USB3 vision sensor. Additional hardware and software implementation details are discussed in Supplement 1.

Probably the most desirable feature of holographic projectors is their ability to redirect illumination power to arbitrary regions within a sub-hologram cone. This property enables high-brightness projection through light reallocation by steering light away from dim image regions to bright image regions without blocking any light. Thus, holography can potentially outperform traditional light-attenuating transmissive displays or self-emissive displays in output illumination power utilization and brightness. This is especially true when displaying extremely sparse content where most pixel values are zero, since most light would then be blocked in light-attenuating displays, and self-emissive displays cannot reallocate the output power of their dark pixels. This can be seen in Fig. 3: image brightness drastically increases with sparsity for holographic projectors but stays constant for traditional displays. Our proposed loss function further improves the brightness at every sparsity level, highlighting our method’s robustness to sparsity. Here, we define sparsity as the ratio of zero-valued pixels to the total number of pixels in an image. These results suggest that holographic projection could be extremely useful in applications such as automotive head-up display (HUD) devices, where most of the displayed content is sparse text and pictograms.
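The sparsity metric and the hyperbolic brightness trend can be made concrete with a back-of-the-envelope model: if a fixed illumination power is reallocated uniformly over the bright fraction of pixels, per-pixel brightness scales as $1/(1-s)$ for sparsity $s$. This idealized model is our own simplification for illustration, not the measured curve in Fig. 3.

```python
import numpy as np

def sparsity(img):
    """Fraction of zero-valued pixels (the definition used in the text)."""
    return np.count_nonzero(img == 0) / img.size

def ideal_brightness_gain(s):
    """Idealized light reallocation: fixed total power concentrated on the
    (1 - s) fraction of bright pixels."""
    return 1.0 / (1.0 - s)

# A HUD-like frame: 500 bright pixels out of 10,000 gives sparsity 0.95, so
# an ideal holographic projector could be 20x brighter per bright pixel.
hud = np.zeros((100, 100))
hud[10:20, 10:60] = 1.0
```

A light-attenuating display, by contrast, keeps per-pixel brightness constant regardless of $s$, which is the flat curve in Fig. 3(a).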


Fig. 3. (a) Quantitative evaluation of image brightness and light efficiency with respect to image sparsity in simulation. For traditional light-attenuating displays, the brightness stays constant as image sparsity increases. On the contrary, there is a hyperbolic increase in image brightness for holographic displays, which is also demonstrated in the experimentally captured images in (b). Although both holographic and conventional displays experience a drop in light efficiency as sparsity increases, holographic displays retain an efficiency of over 70%, while the efficiency of conventional displays goes to nearly 0% for extremely sparse content. Our proposed loss function further improves light efficiency and brightness. Here, light efficiency is defined by the ratio between the total power of the light leaving the display and the total power of the light output by the illumination engine, such as a laser or LCD backlight.


To fully unlock the light reallocation capability of holography, the sub-hologram cone of each SLM pixel should ideally fully cover the projection region such that light can be redirected anywhere within. In other words, the holographic image should be formed very far away from the SLM, which means that the propagation distance $z$ should be long ($z \geq 0.6 \; d^\ast$). More discussion regarding the normalized distance unit $d^\ast$ can be found in Supplement 1. This is illustrated in Fig. 4(c), where we show that a long propagation distance is crucial for a holographic display to perform light reallocation. However, we observe from Fig. 4(a) that a long propagation distance results in severe fringe artifacts in the captured SGD-optimized images. This is the reason why most prior works employ a near-SLM setting that performs little to no light reallocation [13,17]. In contrast, the learned wave propagation model in our work fully enables a long propagation distance, as shown in Fig. 4(b) where all artifacts are eliminated. Coupled with our new loss function, we are able to optimize light efficiency and image brightness while maintaining great image quality.


Fig. 4. (a) Experimentally captured projection images optimized using ASM propagation. Severe fringe and chromatic artifacts are visible due to ASM incorrectly modeling long-distance propagation. We also observe a brightness falloff in the peripheral regions due to laser illumination non-uniformity. We report the peak signal-to-noise ratio (PSNR)/Learned Perceptual Image Patch Similarity metric (LPIPS) of reconstructed images. (b) Experimentally captured projection images optimized using a learned wave propagation model. The model learns to correctly compensate for the long-propagation-distance artifacts and the illumination non-uniformity. (c) Quantitative evaluation of the light-efficiency loss function performance with respect to propagation distance in simulation. As propagation increases, the light-efficiency loss function becomes more effective at constraining all the light to the region of interest, improving brightness and light efficiency. This effect is even more significant for sparse images, as the loss function performance improves from nearly 0% (no light reallocation) to 30%.


From Figs. 2 and 5, we observe that the light-efficiency loss function successfully constrains light within the projection region and eliminates all stray light outside of it, effectively improving image brightness by around 30% for all test cases. Furthermore, the pixel-wise and perceptual image reconstruction performance is maintained when images are optimized with the loss function, showing that a significant light-efficiency gain can be achieved without a large drop in image quality.


Fig. 5. Comparison of experimentally captured images optimized without and with the light-efficiency loss function. We report the image reconstruction quality (PSNR/LPIPS) in blue text and brightness improvement in red. We also show color-mapped zoom-in crops of the luminance channel, where pixel values below 0.5 are clipped to 0.5, and the rest are linearly scaled to [0.5, 1]. By optimizing holograms using our proposed loss function, stray light can be completely eliminated outside of the region of interest, effectively increasing the overall image brightness by around 30%. The LPIPS performance for images optimized with the loss function is retained, indicating that there is no significant drop in perceptual image quality. See Visualization 1 for a split-screen animation comparison.


To summarize, we demonstrate that our holographic projection system is capable of displaying images with unprecedented image quality, brightness, and light efficiency. Nevertheless, our method is not without limitations. It takes about 20 minutes to optimize a phase pattern on a single NVIDIA RTX A6000. Although it is possible to train an inverse model that directly maps input images to optimized phases [12,13], we have not attempted to do this in this Letter and we leave it for future work. Currently, the zeroth-order undiffracted light in our system adversely affects the dynamic range of the projected content and reduces image contrast. However, we believe that this challenge can eventually be overcome by using higher-efficiency SLMs or modifying the system to an off-axis setup where the images and the zeroth-order DC term could be separated by applying a phase ramp to the SLM [18,19]. Finally, we have not extended this method to video content, but in theory it can be easily implemented by dynamically adjusting the laser power to account for the different scale factor $s$ values across frames.

The powerful light reallocation mechanism of holography together with AI-driven, fully automatic display calibration techniques has great potential to revolutionize high-brightness projection systems. In this work, we leverage state-of-the-art camera-calibrated wave propagation model algorithms and a novel loss function design to achieve superior image quality, brightness, and light efficiency in a holographic projection system.

Acknowledgments

B.C. is supported by the Stanford Graduate Fellowship (SGF) and the NSF Graduate Research Fellowship. M.G. is supported by the SGF. S.C. is supported by the Meta Research Ph.D. Fellowship and the Kwanjeong Scholarship. We thank Sony and Samsung for helpful discussions and support.

Disclosures

The authors declare no conflicts of interest.

Data availability

All data needed to evaluate the conclusions in the Letter are present in the Letter and/or Supplement 1 and Visualization 1.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. P. Antonis, D. de Boer, R. Koole, S. Kadijk, Y. Li, V. Vanbroekhoven, P. Van De Voorde, and C. G. A. Hoelen, in Sixteenth International Conference on Solid State Lighting and LED-Based Illumination Systems, N. Dietz and I. T. Ferguson, eds. (SPIE, 2017), p. 23.

2. J. Xiong, E.-L. Hsiang, Z. He, T. Zhan, and S.-T. Wu, Light: Sci. Appl. 10, 216 (2021). [CrossRef]  

3. G. Damberg, J. Gregson, and W. Heidrich, ACM Trans. Graph. 35, 24 (2016). [CrossRef]  

4. R. Hoskinson, B. Stoeber, W. Heidrich, and S. Fels, ACM Trans. Graph. 29, 165 (2010). [CrossRef]  

5. D. Chan, S. G. Narasimhan, and M. O’Toole, in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2022), pp. 17865–17874.

6. M. Makowski, I. Ducin, M. Sypek, A. Siemion, A. Siemion, J. Suszek, and A. Kolodziejczyk, Opt. Lett. 35, 1227 (2010). [CrossRef]  

7. M. Makowski, I. Ducin, K. Kakarenko, J. Suszek, M. Sypek, and A. Kolodziejczyk, Opt. Express 20, 25130 (2012). [CrossRef]  

8. E. Buckley, J. Disp. Technol. 7, 135 (2011). [CrossRef]  

9. T. Shimobaba, T. Kakue, and T. Ito, in 2015 IEEE 13th International Conference on Industrial Informatics (INDIN) (IEEE, 2015), pp. 732–741.

10. S. Choi, M. Gopakumar, Y. Peng, J. Kim, and G. Wetzstein, ACM Trans. Graph. 40, 240 (2021). [CrossRef]  

11. S. Choi, M. Gopakumar, Y. Peng, J. Kim, M. O’Toole, and G. Wetzstein, in Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings (ACM, 2022), pp. 1–9.

12. Y. Peng, S. Choi, N. Padmanaban, and G. Wetzstein, ACM Trans. Graph. 39, 185 (2020). [CrossRef]  

13. L. Shi, B. Li, C. Kim, P. Kellnhofer, and W. Matusik, Nature 591, 234 (2021). [CrossRef]  

14. R. W. Gerchberg and W. O. Saxton, Optik 35, 237 (1972).

15. E. Buckley, A. Cable, N. Lawrence, and T. Wilkinson, Appl. Opt. 45, 7334 (2006). [CrossRef]  

16. J. W. Goodman, Introduction to Fourier Optics, 4th ed. (W.H. Freeman, Macmillan Learning, 2017).

17. A. Maimone, A. Georgiou, and J. S. Kollin, ACM Trans. Graph. 36, 85 (2017). [CrossRef]  

18. C. Chen, D. Kim, D. Yoo, B. Lee, and B. Lee, Opt. Lett. 47, 790 (2022). [CrossRef]  

19. H. Zhang, J. Xie, J. Liu, and Y. Wang, Appl. Opt. 48, 5834 (2009). [CrossRef]  

Supplementary Material (2)

Supplement 1: This supplemental document includes additional details about software/hardware implementations and additional experimental results.
Visualization 1: This supplemental video includes split-screen animations of experimentally captured projection images optimized with and without our proposed light-efficiency loss function. Our loss effectively constrains the illumination to the projection region.

