
Learned holographic light transport: invited


Abstract

Computer-generated holography algorithms often fall short in matching simulations with results from a physical holographic display. Our work addresses this mismatch by learning the holographic light transport in holographic displays. Using a camera and a holographic display, we capture the image reconstructions of optimized holograms that rely on ideal simulations to generate a dataset. Inspired by the ideal simulations, we learn a complex-valued convolution kernel that can propagate given holograms to captured photographs in our dataset. Our method can dramatically improve simulation accuracy and image quality in holographic displays while paving the way for physically informed learning approaches.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. INTRODUCTION

The future of human-computer interactions [1] demands technologies that can display life-like 3D visuals. An emerging trend, computer-generated holography (CGH), promises to deliver such realistic visuals in next-generation displays [2]. However, CGH algorithms often fall short of achieving high image quality in real life.

The traditional CGH algorithms such as the Gerchberg–Saxton method [3] or recently trending approaches such as stochastic-gradient-descent-based (SGD-based) differentiable methods [4–6] can deliver outstanding image quality in simulation environments. However, in an actual holographic display with phase-only modulation, holograms optimized or learned using these ideal holographic light transport models often fail to deliver the same image quality. Identifying the causes of this mismatch and bridging the gap between the image quality of simulations and actual experiments are growing research trends in the holography community.

The traditional solutions [7] to address the mismatch aim to find complex residual values that can be added as a regularization term to the ideal holographic light transport [6] or the complex hologram [5]. These techniques to regularize holographic image reconstruction models [5,6] are powerful and effective in practice. In the meantime, researchers have also garnered interest in learning the hologram generation process using deep learning [5,8]. However, the proposed solutions often yield highly complex algorithmic structures and sometimes require a physically demanding experimentation routine. These complex algorithmic structures involve learning components such as generative adversarial networks (GANs) that are not straightforward to tune and train [6], multilayer perceptrons that model the nonlinear response of an SLM but may carry less semantic meaning for an optical scientist [5], or characterizations of aberrations with Zernike polynomials that require careful experimentation [5,9–12]. We ask ourselves whether the experimentation load and algorithmic complexity can be avoided while giving optical scientists more hints toward understanding imperfections in actual holographic displays. With that question in mind, we aim to derive a new and refined CGH algorithm to improve image quality in actual holographic displays.

This work argues that a holographic light transport model tailored for a target holographic display can account for optical aberrations and bridge the gap between simulations and actual holographic displays. We also argue that such a model can avoid intensive experimentation requirements in display calibration. For this purpose, we propose to learn a single complex-valued point spread function (PSF) that propagates input phase-only holograms to the target image plane. Thus, our holographic light transport model convolves an input phase-only hologram with a learned complex-valued PSF to arrive at physically accurate image reconstructions in simulations for a target holographic display. The learning process compares image reconstructions in simulations against experiments captured with a camera on an actual holographic display. Like any other learning process, ours requires a dataset composed of input phase-only holograms and their corresponding image reconstructions in an actual holographic display. We collect such a dataset from our proof-of-concept display prototype using a camera and a fully differentiable hologram optimization method based on ideal holographic light transport. We show that our learned holographic light transport can dramatically improve simulation accuracy and final image quality in our holographic display. Our key contributions are:

  • Learned holographic light transport. We propose a learned approach for holographic light transport to bridge the gap between simulations and experimentation. Our method learns a single complex convolutional kernel to reconstruct images in simulation similar to real experiments. Our implementation is fully differentiable. We show that image quality results from an actual holographic display can be enhanced with our method while the simulations become highly accurate.
  • Holographic dataset from a proof-of-concept holographic display. To be able to train and derive a single complex convolutional kernel, we build a phase-only holographic display. Then, we capture a series of photographs of holographic image reconstructions resulting from holograms optimized using the ideal holographic light transport.

In the following sections, we first introduce a standard ideal holographic light transport model. Then, we provide the details of our experimental setup. Finally, we introduce our learning technique and provide quantitative results of our method while comparing it against the ideal case.

2. OPTIMIZING HOLOGRAMS WITH IDEAL HOLOGRAPHIC LIGHT TRANSPORT

The topic of light transport plays a crucial role in formulating the basis of various domains including traditional computer graphics [13], architecture [14], biomedical imaging [15], non-line-of-sight imaging [16], 3D printing [17], visible light communications [18], holographic recording [19], computational displays [20], eye prescription correction [21], eye-gaze tracking [22], and ophthalmology [23], to name a few. Although we cover only display technologies in this work, an accurate representation method of light transport can potentially pave the way toward enhancements in many other highlighted applications.

Light transport models used in CGH are based on Rayleigh–Sommerfeld diffraction integrals [24]. This diffraction integral’s first solution, the Huygens–Fresnel principle, is expressed as

$$u(x,y) = \frac{1}{j\lambda}\iint u_0(x,y)\frac{e^{jkr}}{r}\cos(\theta)\,\mathrm{d}x\,\mathrm{d}y,$$
where the resultant field $u(x,y)$ is calculated by integrating over every point across the hologram plane, $u_0(x,y)$ represents the optical field in the hologram plane, $r$ represents the optical path between a selected point in the hologram plane and a selected point in the target plane, $\theta$ represents the angle between these points, $k$ represents the wavenumber ($\frac{2\pi}{\lambda}$), and $\lambda$ represents the wavelength of light. In this model, the optical fields $u_0(x,y)$ and $u(x,y)$ are represented as complex values,
$$u_0(x,y) = A(x,y)e^{j\phi(x,y)},$$
where $A$ represents the spatial distribution of amplitude and $\phi$ represents the spatial distribution of phase across a hologram plane. To simplify our description, we can express the Huygens–Fresnel principle as a superposition of diverging spherical waves originating from a hologram [25]. Perhaps one can also think of the Huygens–Fresnel principle as stamping a complex PSF on a target image plane for each point of a hologram while weighting each stamp with its amplitude and phase from its origin.
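To make this stamping intuition concrete, consider the following minimal sketch written with PyTorch, the library our code base builds upon [30,31]. It evaluates the superposition directly; the grid size, pixel pitch, and random hologram are illustrative assumptions, and the quadruple loop is intentionally naive to mirror the integral.

```python
import torch

wavelength = 515e-9                      # green laser line used in our setup
k = 2 * torch.pi / wavelength            # wavenumber
pitch = 8e-6                             # assumed pixel pitch (illustrative)
z = 0.07                                 # hologram-to-image-plane distance in meters
n = 64                                   # tiny grid; the naive sum scales as O(n^4)

xs = (torch.arange(n) - n / 2) * pitch
X, Y = torch.meshgrid(xs, xs, indexing='ij')
u0 = torch.exp(1j * 2 * torch.pi * torch.rand(n, n))   # random phase-only hologram

u = torch.zeros(n, n, dtype=torch.complex64)
for i in range(n):
    for j in range(n):
        # Optical path r from hologram point (i, j) to every target point,
        # assuming the two planes share lateral coordinates.
        r = torch.sqrt((X - X[i, j]) ** 2 + (Y - Y[i, j]) ** 2 + z ** 2)
        # Stamp a spherical wave weighted by the hologram's complex value;
        # cos(theta) = z / r is the obliquity factor.
        u = u + u0[i, j] * torch.exp(1j * k * r) / r * (z / r)
u = u / (1j * wavelength)                # leading 1 / (j * lambda) factor
```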

Calculating the Huygens–Fresnel integral by visiting each point on a hologram one by one would be slow and would consume a large computation and power budget. Common approaches in the literature [26–28] dedicated to near fields (e.g., short distances like 10 cm or half a meter) formulate this integral as a convolution with a single complex kernel. Hence, the common approaches [29] can be expressed as

$$\begin{split}u(x,y) &= u_0(x,y) * h(x,y)\\ &= \mathcal{F}^{-1}(\mathcal{F}(u_0(x,y))\,\mathcal{F}(h(x,y)))\\ &= \mathcal{F}^{-1}(U_0(f_x,f_y)H(f_x,f_y)),\end{split}$$
where $h$ represents a spatially invariant complex convolution kernel. The value of the complex kernel, $h$, is typically expressed as
$$h(x,y) = \frac{e^{jkz}}{j\lambda z}e^{\frac{jk}{2z}(x^2 + y^2)},$$
where $z$ represents the distance between a hologram plane and a target image plane. This ideal model is implemented in a differentiable fashion (refer to odak.learn.wave.classical L81–114) in our fundamental library for optical sciences [30]. The same library hosts differentiable models of various light transport approximations (refer to odak.learn.wave.classical L8–53) [30].
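As a concrete illustration, a minimal differentiable implementation of Eqs. (3) and (4) in PyTorch could look like the sketch below. It mirrors the ideal model in odak.learn.wave.classical in spirit only; the function name and parameters are our own illustrative choices, not the library's API.

```python
import cmath
import torch

def fresnel_propagate(u0, wavelength, pitch, z):
    """Propagate a complex field u0 over distance z via Eqs. (3) and (4)."""
    n_y, n_x = u0.shape
    k = 2 * torch.pi / wavelength
    x = (torch.arange(n_x) - n_x / 2) * pitch
    y = (torch.arange(n_y) - n_y / 2) * pitch
    Y, X = torch.meshgrid(y, x, indexing='ij')
    # Eq. (4): the spatially invariant Fresnel kernel h(x, y).
    h = cmath.exp(1j * k * z) / (1j * wavelength * z) \
        * torch.exp(1j * k / (2 * z) * (X ** 2 + Y ** 2))
    # Eq. (3): convolution as a product in the Fourier domain; ifftshift
    # centers the kernel at the origin for the circular convolution.
    return torch.fft.ifft2(torch.fft.fft2(u0) * torch.fft.fft2(torch.fft.ifftshift(h)))

phase = 2 * torch.pi * torch.rand(1080, 1920)
reconstruction = fresnel_propagate(torch.exp(1j * phase), 515e-9, 8e-6, 0.07)
```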

Now that we have established an ideal holographic light transport model in Eq. (3), we can use it as a forward model that propagates light from a hologram to a target plane. Since this model is implemented using PyTorch, a modern machine learning library [31], we take advantage of the fact that such libraries can automatically differentiate provided functions. Differentiation lets us calculate the complex gradient of our forward model’s error. In simple terms, for each input phase-only hologram, the resulting image reconstruction can be calculated, and the impact of changing phase values on the image reconstruction can be precisely estimated using gradients. This helps an optimizer make meaningful modifications to the phase values of a phase-only hologram at each optimization step.

To fully realize the described optimization, a loss function is required. For this purpose, we define a loss function $L$ as the squared error between a reconstructed image at a target plane, $u(x,y)$, and a target image, $t(x,y)$:

$$L = (u(x,y) - t(x,y))^2.$$

Note that the loss function described here is the simplest case; we leave customization of this loss function to meet an application’s demands as a future discussion. In addition to a loss function, we require an optimizer to optimize our phase-only holograms for various targets. We choose an SGD-based optimization method [32,33] with a learning rate of 0.1. We run our optimizer using the ideal forward model for 200 iterations at each hologram calculation. Our hologram optimization method is distributed as a part of our fundamental library for optical sciences [30]. We also provide examples (refer to odak.test.learn_sgd) for using the optimization method within our library [30].
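A minimal sketch of this optimization loop, assuming the fresnel_propagate sketch above and reading Eq. (5) as comparing the reconstructed intensity against the target, could look as follows. Adam stands in for the SGD-based optimizer of Refs. [32,33]; the target tensor is a placeholder for a preprocessed image.

```python
import torch

target = torch.rand(1080, 1920)                # placeholder for a target image in [0, 1]
phase = torch.zeros(1080, 1920, requires_grad=True)
optimizer = torch.optim.Adam([phase], lr=0.1)  # learning rate of 0.1, as in the text

for step in range(200):                        # 200 iterations per hologram
    optimizer.zero_grad()
    u0 = torch.exp(1j * phase)                 # phase-only constraint by construction
    u = fresnel_propagate(u0, 515e-9, 8e-6, 0.07)
    loss = torch.mean(((u.abs() ** 2) - target) ** 2)   # Eq. (5)
    loss.backward()                            # autodiff through the complex forward model
    optimizer.step()
```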

Fig. 1. Schematic diagram of our proof-of-concept holographic display prototype used in our experimental setup.

Using the described ideal holographic light transport and hologram optimization methodology, we calculate phase-only holograms of target images from the DIV2K dataset [34]. We resize images in the DIV2K dataset to $1920 \times 1080$ to match the size of our SLM. We also convert those images to monochrome by averaging the three color channels. Note that the DIV2K images are used only in training (i.e., finding the holographic light transport kernel), not in test cases. The calculated image reconstructions perfectly match the target images in simulations; however, they have to be tested against photographs captured from an actual holographic display. Thus, we explain how we build a proof-of-concept holographic display in the next section before explaining our final methodology to improve visual quality and realism.
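A minimal sketch of this preprocessing, with an illustrative file path, could be:

```python
import torchvision.io
import torchvision.transforms.functional as TF

# Load a DIV2K image (path is illustrative), resize to the SLM resolution,
# and average the three color channels to obtain a monochrome target.
img = torchvision.io.read_image('DIV2K/0001.png').float() / 255.0  # [3, H, W]
img = TF.resize(img, [1080, 1920])
target = img.mean(dim=0)                                           # [1080, 1920]
```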

3. PROOF-OF-CONCEPT HOLOGRAPHIC DISPLAY

We build a proof-of-concept holographic display to assess the image quality of our hologram optimization method that uses ideal holographic light transport. We introduce our learned holographic light transport in the next section and use the same proof-of-concept holographic display to assess the image quality of that methodology as well.

The optical layout of our proof-of-concept holographic display is shown in Fig. 1. Following the light from its source, the optical assembly starts with a multiwavelength laser light source (LASOS MCS4). However, for our experimentation, we rely only on the 515 nm wavelength. A Thorlabs LB1945-A bi-convex lens with a 200 mm focal length collimates the output beam of our laser light source. The collimated beam goes through a wire grid linear polarizer (Thorlabs LPVISE100-A) to maintain a polarization aligned with the fast axis of our phase-only spatial light modulator (SLM). The linearly polarized collimated beam bounces off an anti-reflection coated Pellicle beamsplitter (Thorlabs BP245B1) toward our phase-only SLM (Holoeye Pluto 2.0), which is tilted by 0.90 deg to work with the half diffraction order. To avoid undiffracted light, we add a horizontal grating to the holograms displayed on our SLM. The horizontally grated hologram ${u^\prime _0}$ can be calculated as

$$u_0'(x,y) = \begin{cases} e^{-j(\phi(x,y) + \pi)} & \text{for } x\ \text{odd},\\ e^{-j\phi(x,y)} & \text{for } x\ \text{even}, \end{cases}$$
where $\phi$, the original phase of ${u_0}$, is modified. In this way, we steer the location of the reconstructed image in space away from undiffracted light. The tilt angle of our SLM follows from the diffraction equation,
$$m\lambda = \Delta a \sin(\theta),$$
where $m$ is the half order (0.5), $\Delta a$ is the pixel pitch of the SLM, and $\theta$ is the angular location of the grated hologram plane. For our system, $\theta$ is calculated as 1.80°. Therefore, the required tilt angle for the SLM is $\frac{\theta}{2} \approx 0.90°$. In the rest of the setup, the phase-modulated beam goes through the Pellicle beamsplitter. In the next stage, the beam passes through the focusing lenses, a combination of Thorlabs LA1908-A and LB1056-A. A pinhole aperture (Thorlabs SM1D12) follows at the focal distance of the focusing lenses to block undiffracted light. We capture the image reconstructions of our hologram dataset optimized using ideal holographic light transport with a lensless image sensor (Point Grey GS3-U3-23S6M-C USB 3.0). For each captured image reconstruction, we apply a homography correction so we can compare it against a ground-truth image or a simulated reconstruction. The holograms in our work are always reconstructed for a target image plane 7 cm away from our proof-of-concept holographic display.
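The sketch below reproduces this arithmetic and the grating of Eq. (6). The 8 µm pixel pitch is the published Holoeye Pluto specification and should be treated as our assumption here, and the choice of which axis alternates depends on the desired steering direction.

```python
import math
import torch

wavelength = 515e-9
pitch = 8e-6                                               # assumed SLM pixel pitch
theta = math.degrees(math.asin(0.5 * wavelength / pitch))  # Eq. (7) with m = 0.5
print(theta, theta / 2)                                    # ~1.8 deg plane, ~0.9 deg SLM tilt

def add_horizontal_grating(phase):
    # Eq. (6): negate the phase and add a pi offset on every other row.
    u0 = torch.exp(-1j * phase)
    u0[1::2, :] = torch.exp(-1j * (phase[1::2, :] + torch.pi))
    return u0
```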

4. LEARNED HOLOGRAPHIC LIGHT TRANSPORT

We provide sample photographs showing image reconstructions captured from our proof-of-concept holographic display in Fig. 2. These photographs are a result of holograms optimized using the ideal holographic light transport model. We also provide input holograms and their simulated results for comparison. The visual mismatch between photographs and simulated results provides a good understanding of the image quality issues discussed earlier.

Fig. 2. Mismatch between simulated and experimental results when using ideal holographic light transport. For a given (a) phase-only hologram, a simulated result can provide (b) a perfect image reconstruction. (c) For the same hologram, a real holographic display fails to achieve the image reconstruction shown in (b) (Dataset 1, Ref. [35]).

To combat the mismatch illustrated in Fig. 2, we take advantage of our dataset of photographs from the proof-of-concept prototype and their corresponding optimized holograms that used the ideal holographic light transport model (Dataset 1, Ref. [35]). Using an SGD-based optimization method [32,33] with a learning rate of 0.002 and the loss function in Eq. (5), we set out to learn a complex kernel ${h_l}(x,y)$ that replaces the original $h(x,y)$ from the ideal case. This newly optimized ${h_l}(x,y)$ can be best described as a transfer function that takes an ideal input hologram and provides an image reconstruction similar to the captured photographs in our dataset. The code base of our learning process follows the same optimization described in Section 2 (refer to realistic_holography:optics L87–L137) [36]. The phase and amplitude of the learned complex kernel ${h_l}(x,y)$ and the ideal complex kernel $h(x,y)$ are provided in Fig. 3 for comparison.
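A minimal sketch of this kernel-learning stage, under the same assumptions as the earlier sketches, could look as follows. Illustrative tensors stand in for the dataset of hologram phases and normalized photographs, and the kernel is randomly initialized for brevity, though initializing from the ideal kernel of Eq. (4) is a natural choice.

```python
import torch

holograms = torch.rand(100, 1080, 1920) * 2 * torch.pi    # optimized hologram phases
captures = torch.rand(100, 1080, 1920)                    # normalized photographs

h_real = torch.randn(1080, 1920, requires_grad=True)
h_imag = torch.randn(1080, 1920, requires_grad=True)
optimizer = torch.optim.Adam([h_real, h_imag], lr=0.002)  # learning rate from the text

for epoch in range(10):
    for phase, photo in zip(holograms, captures):
        optimizer.zero_grad()
        h_l = torch.complex(h_real, h_imag)               # the learned kernel h_l(x, y)
        u = torch.fft.ifft2(torch.fft.fft2(torch.exp(1j * phase))
                            * torch.fft.fft2(torch.fft.ifftshift(h_l)))
        loss = torch.mean(((u.abs() ** 2) - photo) ** 2)  # Eq. (5)
        loss.backward()
        optimizer.step()
```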

Fig. 3. Phase and amplitude comparison between complex kernels used in (a) an ideal holographic light transport and (b) a learned holographic light transport.

5. EVALUATION

Now that we have a learned transfer function ${h_l}(x,y)$, shown in Fig. 3, we investigate how this kernel representing the learned holographic light transport can help us optimize new holograms. If the learned kernel is indeed more realistic than the ideal kernel, the optimized holograms should lead to better image quality in experiments, and the mismatch between simulations and experiments should be mitigated. We challenge these assumptions by optimizing holograms using ${h_l}(x,y)$ instead of $h(x,y)$. In optimizing holograms with the learned holographic light transport, we rely on the same process described in Section 2 (refer to realistic_holography:optics L45–85) [36].
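In code, swapping the kernels amounts to replacing the analytic $h$ of Eq. (4) with the learned tensor in the same FFT-based forward model, as in this brief sketch (the function name is ours):

```python
import torch

def learned_propagate(u0, h_l):
    # Identical to the ideal forward model, with the learned kernel
    # h_l(x, y) substituted for the analytic h(x, y) of Eq. (4).
    return torch.fft.ifft2(torch.fft.fft2(u0)
                           * torch.fft.fft2(torch.fft.ifftshift(h_l)))
```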

A. Image Quality

We provide a visual comparison between holograms generated using the ideal holographic light transport and the learned holographic light transport in Fig. 4. The visual quality of the reconstructed images in our proof-of-concept holographic display using the learned holographic light transport shows a significant improvement over the ideal case. We believe this is because imperfections in our proof-of-concept holographic display are accounted for in our transfer function. We invite readers to observe the visual difference between the ideal transfer function and the learned transfer function provided in Fig. 3. Please note the asymmetry in the learned kernel, which does not exist in the ideal kernel.

Fig. 4. Visual comparison between (a) an ideal holographic light transport and (b) a learned holographic light transport in reconstructing images. Both photographs are captured with holograms optimized using the corresponding holographic light transport models and our proof-of-concept prototype. Note that the target images in both cases are not in our training set (DIV2K [34]).

B. Mismatch between Simulations and Experiments

Our learned holographic light transport approximates a transfer function of our proof-of-concept display accurately [training L2 loss: 0.0028 and test loss: 0.0034 (learned reconstruction versus captured ground-truth images); note that images are normalized between zero and one]. To provide evidence that this is the case, we compare image reconstructions from our simulations with experimental results from our proof-of-concept holographic display. This comparison is sampled in Fig. 5. The brightness and contrast levels of our simulations with learned holographic light transport do not fully match our photographs from experimentation. However, the spatial content in experimental cases closely resembles the simulated reconstructions and even gives an excellent hint about what to expect in terms of visual quality from a given holographic display. If further tweaking is needed, the brightness mismatch in the ideal and learned cases can be reduced by following a manual calibration routine. In the supplementary documentation of the work by Choi et al. [37], curious readers can find highly detailed instructions for minimizing the average difference between a simulation and a physical prototype by adjusting the laser power and exposure time. We have not conducted such a calibration for this work because we wanted to show the improvement over an uncalibrated system.

Fig. 5. (a) Learned simulation versus (b) real photograph. The ideal light transport based hologram optimization estimates unrealistic results in simulation. (c) On the other hand, for a given target image, simulations based on learned holographic light transport closely resemble the experimental results.

C. What Did We Learn from the Learned Kernel?

The holographic light transport kernel learned in this work, shown in Fig. 3, indicates that the phase and amplitude behavior of our physical light source is not homogeneous in terms of angular emission. Readers may observe this fact by carefully checking the asymmetry of the kernel in Fig. 3. The amplitude values are far greater than those of the ideal kernel, suggesting that hologram optimization must account for this gain to arrive at brighter images. This can be observed in Fig. 4, where the dynamic range and brightness levels are better preserved by the learned method. Note that the learned kernel is the PSF of the given holographic display. Thus, the resolution characteristics of the holographic display can also be analyzed in the future by studying the limits of a learned PSF. Finally, note that a single kernel can only capture the global mean of a general trend in a holographic display. We discuss how to improve our learned method in the future in the final paragraph of this section.
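As one hypothetical way to quantify these observations, the asymmetry and amplitude gain of a learned kernel can be summarized in a few lines; the placeholder tensors below stand in for the ideal and learned kernels of the earlier sketches.

```python
import torch

h = torch.randn(1080, 1920, dtype=torch.complex64)    # placeholder: ideal kernel
h_l = torch.randn(1080, 1920, dtype=torch.complex64)  # placeholder: learned kernel

def asymmetry(kernel):
    # Mean absolute deviation from the kernel's mirror images;
    # both values are zero for a perfectly symmetric kernel.
    left_right = (kernel - torch.flip(kernel, dims=[1])).abs().mean()
    top_bottom = (kernel - torch.flip(kernel, dims=[0])).abs().mean()
    return left_right.item(), top_bottom.item()

# A ratio above one indicates the learned kernel carries more amplitude,
# consistent with the brightness behavior observed in Fig. 4.
amplitude_gain = (h_l.abs().mean() / h.abs().mean()).item()
```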

D. Comparison to the State of the Art

The leading state-of-the-art methods [5,6] that bridge the gap between simulations and physical holographic displays consist of convolutional neural networks. Specifically, the work by Peng et al. [5] relies on more than 8 million parameters tuned in a neural network training process. Many parameters and layers are needed to efficiently capture the correlation between a hologram and a final reconstructed image; otherwise, the connections between the pixels of a hologram and a final reconstructed image may not be fully identified (locality issue). This locality issue arises from the fact that such models use small kernel sizes. In contrast, our work decreases the number of tunable parameters to half [4 million parameters ($2 \times 1080 \times 1920$, amplitude and phase)] while relying on a kernel size that is the same as an input image, which avoids the locality issue. The work by Maimone et al. [38] uses 1D separable functions to reduce the memory footprint in classical CGH for ideal complex forms such as quadratic phase functions. Drawing inspiration from that work, we speculate that further reduction may be possible by storing a parametric form of a learned complex kernel.

Readers may ask whether our approach provides the greatest possible accuracy in bridging the gap between simulations and experiments in holography. We followed an approach similar to the classical model, where a single convolutional kernel formulates the transfer function of light transport. Hence, our approach is accurate as long as a single kernel reliably describes the light transport. To improve accuracy further and to have one-to-one matching simulations in the future, we speculate that approaches with spatially varying convolutional kernels can provide more capacity to accommodate a genuinely realistic simulation. On the other hand, we learn the light transport between a hologram and a single image plane. Approaches that provide 3D image reconstructions in CGH require a transfer function representing the relationship between a single hologram and multiple image planes; in our approach, we must learn the kernel for each plane using a set of images. In the future, a complete form of our approach can potentially be derived, where a dataset with diverse image reconstruction distances helps to learn a parametric light transport rather than per-plane learning. We believe our work does an excellent job of capturing optical aberrations and imperfections in a holographic display. Our work can be best described as the simplest form of improving realism in CGH algorithms without dealing with complex experimentation or complex algorithmic approaches.

6. CONCLUSION

Holographic displays often require tedious effort to optimize holograms for the best possible image quality. We propose what is, to the best of our knowledge, a simple new learned method to address this issue. The core of our approach is a learning procedure that approximates an accurate holographic light propagation model for a given actual holographic display. With this approach, we can optimize holograms that dramatically improve image quality relative to a typical ideal holographic light transport model. Our method is simple yet effective, does not suffer from the overhead of deriving complex algorithmic approaches, and paves the way toward physically informed learning approaches in the holography domain.

Acknowledgment

The authors thank the anonymous reviewers for their useful feedback. The authors also thank Oliver Kingshott and Duygu Ceylan for the fruitful and inspiring discussions improving the outcome of this research, and Selim Ölçer for helping with the fiber alignment of the laser light source in the proof-of-concept display prototype.

Disclosures

The authors declare no conflicts of interest.

Data Availability

The generated dataset of this work is available in Dataset 1, Ref. [35]. The code base discussed in Sections 2 and 4 is available in Refs. [30,36].

REFERENCES

1. J. Orlosky, M. Sra, K. Bektaş, H. Peng, J. Kim, N. Kos’myna, T. Hollerer, A. Steed, K. Kiyokawa, and K. Akşit, “Telelife: the future of remote living,” arXiv preprint arXiv:2107.02965 (2021).

2. G. A. Koulieris, K. Akşit, M. Stengel, R. K. Mantiuk, K. Mania, and C. Richardt, “Near-eye display and tracking technologies for virtual and augmented reality,” in Computer Graphics Forum (Wiley Online Library, 2019), Vol. 38, pp. 493–519.

3. G.-Z. Yang, B.-Z. Dong, B.-Y. Gu, J.-Y. Zhuang, and O. K. Ersoy, “Gerchberg–Saxton and Yang–Gu algorithms for phase retrieval in a nonunitary transform system: a comparison,” Appl. Opt. 33, 209–218 (1994). [CrossRef]  

4. C. Chen, B. Lee, N.-N. Li, M. Chae, D. Wang, Q.-H. Wang, and B. Lee, “Multi-depth hologram generation using stochastic gradient descent algorithm with complex loss function,” Opt. Express 29, 15089–15103 (2021). [CrossRef]  

5. Y. Peng, S. Choi, N. Padmanaban, and G. Wetzstein, “Neural holography with camera-in-the-loop training,” ACM Trans. Graph. 39, 1–14 (2020). [CrossRef]  

6. P. Chakravarthula, E. Tseng, T. Srivastava, H. Fuchs, and F. Heide, “Learned hardware-in-the-loop phase retrieval for holographic near-eye displays,” ACM Trans. Graph. 39, 1–18 (2020). [CrossRef]  

7. R. Li and L. Cao, “Progress in phase calibration for liquid crystal spatial light modulators,” Appl. Sci. 9, 2012 (2019). [CrossRef]  

8. J. Wu, K. Liu, X. Sui, and L. Cao, “High-speed computer-generated holography using an autoencoder-based deep neural network,” Opt. Lett. 46, 2908–2911 (2021). [CrossRef]  

9. T. Zhao, J. Liu, X. Duan, Q. Gao, J. Duan, X. Li, Y. Wang, W. Wu, and R. Zhang, “Multi-region phase calibration of liquid crystal SLM for holographic display,” Appl. Opt. 56, 6168–6174 (2017). [CrossRef]  

10. B. Zhang, Y. Chen, and R. Feng, “A calibration method for phase-only spatial light modulator,” in Progress In Electromagnetics Research Symposium-Spring (PIERS) (IEEE, 2017), pp. 133–135.

11. G. Krasin, N. Stsepuro, I. Gritsenko, and M. Kovalev, “Holographic method for precise measurement of wavefront aberrations,” Proc. SPIE 11774, 1177407 (2021). [CrossRef]  

12. X. Xun and R. W. Cohn, “Phase calibration of spatially nonuniform spatial light modulators,” Appl. Opt. 43, 6400–6406 (2004). [CrossRef]  

13. J. Vorba, O. Karlík, M. Šik, T. Ritschel, and J. Křivánek, “On-line learning of parametric mixture models for light transport simulation,” ACM Trans. Graph. 33, 1–11 (2014). [CrossRef]

14. M. Ayoub, “A review on light transport algorithms and simulation tools to model daylighting inside buildings,” Solar Energy 198, 623–642 (2020). [CrossRef]  

15. J. Jönsson and E. Berrocal, “Multi-Scattering software: part I: online accelerated Monte Carlo simulation of light transport through scattering media,” Opt. Express 28, 37612–37638 (2020). [CrossRef]  

16. S. A. Reza, M. La Manna, S. Bauer, and A. Velten, “Phasor field waves: a Huygens-like light transport model for non-line-of-sight imaging applications,” Opt. Express 27, 29380–29400 (2019). [CrossRef]  

17. T. Rittig, D. Sumin, V. Babaei, P. Didyk, A. Voloboy, A. Wilkie, B. Bickel, K. Myszkowski, T. Weyrich, and J. Křivánek, “Neural acceleration of scattering-aware color 3D printing,” in Computer Graphics Forum (Wiley Online Library, 2021), Vol. 40, pp. 205–219.

18. G. Corbellini, K. Aksit, S. Schmid, S. Mangold, and T. R. Gross, “Connecting networks of toys and smartphones with visible light communication,” IEEE Commun. Mag. 52(7), 72–78 (2014). [CrossRef]  

19. C. Jang, O. Mercier, K. Bang, G. Li, Y. Zhao, and D. Lanman, “Design and fabrication of freeform holographic optical elements,” ACM Trans. Graph. 39, 1–15 (2020). [CrossRef]  

20. K. Akşit, “Patch scanning displays: spatiotemporal enhancement for displays,” Opt. Express 28, 2107–2121 (2020). [CrossRef]  

21. P. Chakravarthula, D. Dunn, K. Akşit, and H. Fuchs, “FocusAR: auto-focus augmented reality eyeglasses for both real world and virtual imagery,” IEEE Trans. Vis. Comput. Graph. 24, 2906–2916 (2018). [CrossRef]  

22. R. Li, E. Whitmire, M. Stengel, B. Boudaoud, J. Kautz, D. Luebke, S. Patel, and K. Akşit, “Optical gaze tracking with spatially-sparse single-pixel detectors,” in IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (IEEE, 2020), pp. 117–126.

23. G. Aydındoğan, K. Kavaklı, A. Şahin, P. Artal, and H. Ürey, “Applications of augmented reality in ophthalmology,” Biomed. Opt. Express 12, 511–538 (2021). [CrossRef]  

24. J. C. Heurtley, “Scalar Rayleigh–Sommerfeld and Kirchhoff diffraction integrals: a comparison of exact evaluations for axial points,” J. Opt. Soc. Am. 63, 1003–1008 (1973). [CrossRef]  

25. J. W. Goodman, Introduction to Fourier Optics (Roberts & Co., 2005).

26. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17, 19662–19673 (2009). [CrossRef]  

27. W. Zhang, H. Zhang, and G. Jin, “Band-extended angular spectrum method for accurate diffraction calculation in a wide propagation range,” Opt. Lett. 45, 1543–1546 (2020). [CrossRef]  

28. W. Zhang, H. Zhang, and G. Jin, “Adaptive-sampling angular spectrum method with full utilization of space-bandwidth product,” Opt. Lett. 45, 4416–4419 (2020). [CrossRef]  

29. M. Sypek, “Light propagation in the Fresnel region–New numerical approach,” Opt. Commun. 116, 43–48 (1995). [CrossRef]  

30. K. Akşit, A. S. Karadeniz, P. Chakravarthula, W. Yujie, K. Kavaklı, Y. Itoh, and D. R. Walton, “Odak 0.1.9,” Zenodo, 2021, https://doi.org/10.5281/zenodo.5526684.

31. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, “Automatic differentiation in PyTorch,” presented at NIPS 2017 Workshop on Autodiff, Long Beach, California, 9 December 2017.

32. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

33. I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,” arXiv preprint arXiv:1711.05101 (2017).

34. A. Ignatov, R. Timofte, T. V. Vu, et al., “PIRM challenge on perceptual image enhancement on smartphones: report,” in European Conference on Computer Vision (ECCV) Workshops (2019).

35. K. Kavaklı, H. Urey, and K. Akşit, “Phase-only holograms and captured photographs,” University College London, v. 1, 2021, https://doi.org/10.5522/04/15087867.v1.

36. K. Kavaklı, H. Urey, and K. Akşit, “Realistic holography,” v. 0.1, GitHub, 2021, https://github.com/complight/realistic_holography.

37. S. Choi, J. Kim, Y. Peng, and G. Wetzstein, “Optimizing image quality for holographic near-eye displays with Michelson holography,” Optica 8, 143–146 (2021). [CrossRef]  

38. A. Maimone, A. Georgiou, and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36, 1–16 (2017). [CrossRef]  

Supplementary Material (1)

Dataset 1: Phase-only holograms and their captures.

