Abstract

Computer generated holography (CGH) algorithms come in many forms, with different trade-offs between visual quality and calculation speed. However, no CGH algorithm to date can accurately account for all 3D visual cues simultaneously, such as occlusion, shadows, continuous parallax, and precise focal cues, without discretizing views. Our aim is to create photorealistic CGH content, not only for display purposes but also as reference data for comparing and testing CGH and compression algorithms. We propose a novel algorithm combining the precision of point-based CGH with the accurate shading and flexibility of ray-tracing algorithms. We demonstrate this by creating a scene with global illumination, soft shadows, and precise occlusion cues, implemented with OptiX and CUDA.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Holographic display technology can faithfully reconstruct the wavefield of light, making it the ultimate 3D display technology [1]: since it can account for all visual cues, virtual objects can appear exactly like real ones to the human eye. Achieving this goal requires more than the right display technology; the holographic input signal fed to the display must encode realistic-looking objects as well.

The aim of computer generated holography (CGH) is to design algorithms that optimize both realism and computational efficiency of the generated holograms. The main challenge for CGH is that the wave-based nature of holography implies that every point in 3D space can potentially affect every hologram pixel. This many-to-many relationship, compounded with the large resolutions needed for holographic displays, makes brute-force calculations generally too slow for most applications.

CGH algorithms can be classified by the basic logical constituents used to represent the 3D scene [1,2]: there are point cloud methods [3–5], polygon methods [6–8], layer-based methods [9–11], and ray-based or holographic stereogram methods [12–17]. Some algorithms belong in multiple categories, each having different trade-offs w.r.t. computational efficiency, range of supported graphical effects, and types of supported scenes (3D geometry and extent, viewing angles, etc.).

One of the most widely used techniques is point cloud CGH. It consists of summing the point spread functions (PSFs) emanating from a collection of self-luminous points in 3D space. Because the holographic PSF is a mathematically exact expression, it provides perfect 3D cues (eye focusing, continuous views, sharpness, etc.) for individual points. Moreover, its simple expression makes it highly suited for parallel computing implementations, for specialized hardware for real-time hologram calculation [18], and for algorithmic accelerations such as sparse CGH [19–21]. However, the standard 3D point cloud formulation does not support many basic effects such as occlusion and shading. These have been partially addressed by modulating the PSF zone plate [22], or by using pre-computed light distributions with look-up tables and assigning a small light-blocking volume to every point [10].

Holographic signals encode coherent waves, so they differ fundamentally from ray-based light models, and many lighting effects therefore have to be redesigned for CGH. At present, the realism achieved by CGH is still limited compared to state-of-the-art image rendering in computer graphics. This is why several attempts have been made to integrate existing computer-graphics systems, using their generated output to create CGH with higher realism.

These hybrid systems almost invariably consist of holographic stereograms: the hologram is subdivided into pieces, often called hogels or elementary holograms, where each piece corresponds to a small view with a perspective (pinhole) or an orthographic projection. The rendered views are then combined with random phase or depth maps to create the holographic stereograms. They can be viewed as techniques that convert discrete light fields into holograms. Examples include utilizing ray-tracing for generating the hogel views [23], creating fully computed stereograms from rendered views and depth maps [16], or converting light fields into holograms via block-wise FFTs and random phase modulation, called “ray-wavefront conversion” [15,17].

However, one important limitation of these approaches is the lack of phase coherence between hogels: the many rays emitted from different positions and angles will almost never exactly hit the same point in the scene, and since the phase is extremely sensitive to minor depth variations, the phase of the resulting holographic signal will be mismatched across hogels for the same scene object positions (cf. Fig. 1). This limits the achievable visual quality. Furthermore, views across hogels are discretized, although holograms support continuous views in principle, and computing holographic stereograms imposes restrictions on possible scene geometries.

Fig. 1. Diagram of how the direction of ray-tracing affects the calculated hologram. (a) Forward ray tracing. Rays are traced from a single point to all hologram pixels, leading to a coherent PSF. (b) Backward ray tracing. The individual hogels in stereogram methods trace rays that do not exactly hit the same point in the 3D scene, leading to discontinuities across hogels and degrading visual quality.


To address this limitation, we propose a hybrid method that combines the best aspects of point-cloud methods and ray-tracing (cf. Fig. 2). Instead of directly calculating the light field in the hologram plane, the local bidirectional reflectance distribution function (BRDF) is computed at every point of the point cloud in scene space, modulating the standard PSF expression. This achieves realistic lighting effects without compromising phase coherence, resulting in continuous views and accurate depth cues.

Fig. 2. Examples of rendered views using (a) conventional point-based CGH and (b) stereogram CGH.


To compute a hologram, we use the laws of diffraction given by the Huygens–Fresnel principle [24]: it expresses how to calculate the complex-valued amplitude at any point ${\textbf{p}}$ on a hologram $H$, given a collection of surfaces $S$, by integrating over all ${\textbf{x}} \in S$. We propose a generalization, defined as

$$H(\textbf{p})=\iint_{S}A(\textbf{x},\textbf{p})\exp (ik\parallel \textbf{p}-\textbf{x}\parallel )\,\textbf{n}\cdot \langle \textbf{p}-\textbf{x}\rangle {\rm d}\textbf{x},$$
where ${\textbf{n}}$ is the surface normal of $S$ in ${\textbf{x}}$. $k = 2\pi /\lambda$ is the wave number, where $\lambda$ is the wavelength of the light, and $i$ is the imaginary unit. $\parallel \cdot \parallel$ is the Euclidean norm, and $\langle \cdot \rangle$ is the normalization operator, so that $\langle {\textbf{a}}\rangle = \frac{{\textbf{a}}}{{\parallel {\textbf{a}}\parallel}}$.

Unlike the conventional Huygens–Fresnel definition, we have that $A({\textbf{x}},{\textbf{p}}) \ne A({\textbf{x}})$, so points do not have to emit the same amount of light in all directions. Furthermore, we omit the constant factor $1/i\lambda$, which does not affect the rendered hologram.

To numerically evaluate this integral, the expression in Eq. (1) must be discretized. Points ${\textbf{p}}$ are sampled on a regular grid representing the hologram’s pixels, equispaced by a distance $p$ called the pixel pitch. The surfaces $S$ (e.g., 3D shapes or polygon meshes) in the scene are randomly sampled, resulting in a discrete set of points $\bar S$ containing $\# \bar S = N$ points. Because ${\textbf{x}}$ now represents surface samples rather than self-luminous points emitting spherical waves, the oblique factor ${\textbf{n}} \cdot \langle {\textbf{p}} - {\textbf{x}}\rangle$ is kept.
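
As a concrete illustration, below is a minimal CUDA sketch of this discretized sum (not the authors' implementation; the `PointSample` layout, grid centering, and one-thread-per-pixel mapping are our own choices). The view-dependent specular part of $A({\textbf{x}},{\textbf{p}})$, introduced in Eqs. (2) and (7) below, is omitted here and treated later.

```cuda
// Sketch (not the authors' code): brute-force evaluation of the discretized
// Eq. (1). Each thread accumulates all N point contributions for one pixel.
#include <cuda_runtime.h>

struct PointSample {
    float3 pos;   // sample position x in scene space
    float3 n;     // surface normal at x
    float2 amp;   // view-independent complex amplitude, e.g., B_d(x) * Phi(x)
};

__global__ void accumulatePSFs(const PointSample* pts, int N, float2* H,
                               int res, float pitch, float k)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= res || py >= res) return;

    // Pixel position p on the hologram plane z = 0, centered on the origin.
    float3 p = make_float3((px - res / 2) * pitch, (py - res / 2) * pitch, 0.0f);

    float2 acc = make_float2(0.0f, 0.0f);
    for (int j = 0; j < N; ++j) {
        float3 d = make_float3(p.x - pts[j].pos.x, p.y - pts[j].pos.y,
                               p.z - pts[j].pos.z);
        float r = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
        // Oblique factor n . <p - x>
        float obl = (pts[j].n.x * d.x + pts[j].n.y * d.y + pts[j].n.z * d.z) / r;
        float s, c;
        sincosf(k * r, &s, &c);                // exp(ik||p - x||) = c + i*s
        float2 a = pts[j].amp;
        acc.x += obl * (a.x * c - a.y * s);    // complex multiply a * exp(ikr)
        acc.y += obl * (a.x * s + a.y * c);
    }
    H[py * res + px] = acc;
}
```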

The main challenge is to accurately compute the PSF modulation function $A$. We define it to be

$$A({\textbf{x}},{\textbf{p}}) = B({\textbf{x}},{\textbf{p}}) \cdot \Phi ({\textbf{x}}),$$
where $\Phi :{\mathbb R} \to {\mathbb T}$ is a random phase function, so that $\Phi ({\textbf{x}}) = \exp (i\varphi ({\textbf{x}}))$, with $\varphi ({\textbf{x}}) \sim {\cal U}(0,2\pi)$, i.e., drawn from the uniform distribution between 0 and $2\pi$. Here, $B:{{\mathbb R}^2} \to {\mathbb R}$ defines the amount of light emitted from a point ${\textbf{x}}$ to a hologram pixel ${\textbf{p}}$, which corresponds to the BRDF. We note that $B$ can be generalized to $B: {{\mathbb R}^2} \to {\mathbb C}$, and more complex phase distributions for $\Phi$ can be chosen, to model even more kinds of light interactions and phenomena.
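
For concreteness, the per-point modulation of Eq. (2) could be precomputed as in the following sketch (a hedged example: the paper does not specify where $\varphi$ is drawn or stored, and the real-valued $B$ used here stands in for the diffuse term defined later):

```cuda
// Sketch of Eq. (2): A(x) = B(x) * exp(i*phi), with phi ~ U(0, 2*pi).
#include <cmath>
#include <complex>
#include <random>

std::complex<float> pointAmplitude(float B, std::mt19937& rng)
{
    std::uniform_real_distribution<float> U(0.0f, 6.2831853f); // [0, 2*pi)
    float phi = U(rng);                                        // random phase
    return B * std::complex<float>(std::cos(phi), std::sin(phi));
}
```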

Directly computing $B$ for all pairs of points $\{{\textbf{x}},{\textbf{p}}\}$ would be highly redundant and thereby inefficient. Instead, the rendering happens in three phases. The first phase consists of an optimized random point sampling. In the second phase, a simplified representation of $B$ is computed for every point ${\textbf{x}} \in \bar S$. In the third phase, the PSFs are modulated by the ray-traced BRDF, accounting for occlusions and aliasing. Since the equation $\lambda \nu = \sin (\theta)$ relates the incidence angle of light $\theta$ w.r.t. the hologram normal to the induced spatial frequency $\nu$ of the holographic signal, care should be taken not to evaluate Eq. (1) at an angle where $\nu$ surpasses the hologram’s Nyquist limit; otherwise, aliasing occurs, leading to erroneous signal values.

Given a budget of $N$ points, we would like to optimize the sampling of the surfaces $S$, taking into account their relative distance and orientation w.r.t. the hologram plane. We want to avoid oversampling surfaces that are far away and/or tilted away at a steep angle from the hologram plane. This is achieved by shooting sampling rays from the tip of the aliasing-free cone [1] randomly towards the hologram surface. If an object is hit, the point is added to the list. It is important to make the rays “piercing,” i.e., to ignore occlusions as shown in Fig. 3(b); otherwise, we would sample partially occluded surfaces less frequently, leading to incorrect results. Because we have a continuum of viewpoints on the hologram surface instead of a pinhole camera, the conventional condition for occlusion culling has to be modified. Only surfaces whose normal ${\textbf{n}}$ satisfies

$${\textbf{u}} \cdot {\textbf{n}} \lt \sin ({\theta _{{\rm max}}}) = \lambda {\nu _{{\rm max}}} = \frac{\lambda}{{2p}}$$
at the sampling point are taken into consideration. ${\textbf{u}}$ is the hologram normal pointing to the scene, and ${\theta _{{\rm max}}}$ is the maximum diffraction angle determined by the Nyquist rate ${\nu _{{\rm max}}}$ given by the grating equation. Choosing $N$ optimally depends on many factors, such as the desired quality, hologram parameters, texture detail, material properties, and light source placement; this is currently considered out of scope.
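
A possible formulation of this culling test, under the assumption that the hologram normal is ${\textbf{u}} = (0,0,1)$ and with an illustrative helper name:

```cuda
// Sketch: the culling test of Eq. (3) for the sampling phase, assuming the
// hologram normal u = (0, 0, 1) points into the scene.
#include <cuda_runtime.h>

// A pierced surface hit can contribute to the hologram only if light
// diffracted from it stays below the maximum diffraction angle theta_max,
// i.e., u . n < sin(theta_max) = lambda / (2p).
__host__ __device__ inline bool surfaceContributes(float3 n, float lambda,
                                                   float pitch)
{
    float sinThetaMax = lambda / (2.0f * pitch);
    return n.z < sinThetaMax;   // u . n with u = (0, 0, 1)
}
```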

Fig. 3. Illustration of the rendering phases. (a) Scene geometry: sampling rays originate from the aliasing-free cone tip. (b) Sampling calculation phase: randomly shot rays “pierce” objects to avoid undersampling of partially occluded surfaces. (c) BRDF calculation phase: red rays are those traced to the light source, and blue ones are randomly sampled from the hemisphere around ${\textbf{x}}$ to obtain global illumination. Only rays lying in the cone of interest, given by Eq. (6), are added to $L^\prime ({\textbf{x}})$. The other (dotted) rays are omitted. (d) PSF calculation phase: visibility rays are traced from the point to the hologram plane within the aliasing-free PSF region. $A({\textbf{x}},{\textbf{p}})$ is computed only for pixels corresponding to non-occluded rays.


In this simulation, every point ${\textbf{x}} \in S$ has a material characterized by a diffuse reflection constant ${K_d}({\textbf{x}})$, a specular reflection constant ${K_s}({\textbf{x}})$, and a shininess constant $\alpha ({\textbf{x}})$ according to the Phong reflection model. No ambient term is needed because we use the more accurate global illumination model.

Two sets of radiance rays are traced per point [Fig. 3(c)] to obtain a per-point model of the BRDF. One set of rays samples the area lights, subdividing them into squares and tracing one ray to a random position within each square. The second set uniformly randomly samples the hemisphere on the polygon surface to obtain information on the global illumination. The set of all traced radiance rays for a point ${\textbf{x}}$ is denoted $L({\textbf{x}})$, with a constant predetermined size $\# L({\textbf{x}}) = {n_L}$.
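
As an illustration of this per-point ray generation, consider the following sketch (the `Vec3` type, jittered-grid strategy, and rejection-sampled hemisphere are our own choices; the paper only specifies one random ray per light square and uniform hemisphere sampling):

```cuda
// Sketch: generate the two sets of radiance ray targets/directions per point.
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };

// One target at a random position inside each of the s x s squares
// subdividing a rectangular area light (corner c, edge vectors e1, e2).
std::vector<Vec3> sampleAreaLight(Vec3 c, Vec3 e1, Vec3 e2, int s,
                                  std::mt19937& rng)
{
    std::uniform_real_distribution<float> U(0.0f, 1.0f);
    std::vector<Vec3> targets;
    for (int i = 0; i < s; ++i)
        for (int j = 0; j < s; ++j) {
            float u = (i + U(rng)) / s, v = (j + U(rng)) / s; // jittered grid
            targets.push_back({c.x + u * e1.x + v * e2.x,
                               c.y + u * e1.y + v * e2.y,
                               c.z + u * e1.z + v * e2.z});
        }
    return targets;
}

// Uniformly sampled direction on the hemisphere around surface normal n
// (used for the global-illumination rays).
Vec3 sampleHemisphere(Vec3 n, std::mt19937& rng)
{
    std::uniform_real_distribution<float> U(-1.0f, 1.0f);
    Vec3 d;
    float len2;
    do {  // rejection-sample the unit ball, then normalize
        d = {U(rng), U(rng), U(rng)};
        len2 = d.x * d.x + d.y * d.y + d.z * d.z;
    } while (len2 > 1.0f || len2 < 1e-8f);
    float inv = 1.0f / std::sqrt(len2);
    d = {d.x * inv, d.y * inv, d.z * inv};
    if (d.x * n.x + d.y * n.y + d.z * n.z < 0.0f) // flip into upper hemisphere
        d = {-d.x, -d.y, -d.z};
    return d;
}
```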

These rays are used to approximately model the BRDF $B({\textbf{x}},{\textbf{p}})$, encoded by two components: a constant term ${B_d}({\textbf{x}})$, representing the diffuse light emission strength in all directions, and a bundle of light vectors $L^\prime ({\textbf{x}}) \subseteq L({\textbf{x}})$ used to compute the viewpoint-dependent (and thus hologram-pixel-dependent) term ${B_s}({\textbf{x}},{\textbf{p}})$, e.g., specular reflections. Note that generally $L^\prime ({\textbf{x}}) \ne L({\textbf{x}})$, because in practice many of the traced rays have no (noticeable) effect on ${B_s}$ for various reasons: their incidence angle, low light intensity, a small ${K_s}$, or the value of $\alpha$. These can thus be omitted from the bundle to save calculation time.

For every traced ray $\ell$, whose norm $\parallel \ell \parallel$ is proportional to the incoming light intensity, the diffuse term is accumulated:

$${B_d}({\textbf{x}}) = \sum\limits_{\ell \in L({\textbf{x}})} \max (0,{K_d}({\textbf{x}}) \cdot (\ell \cdot {\textbf{n}})).$$
A ray $\ell$ is added to the bundle $L^\prime ({\textbf{x}})$ only if its maximum effect on the hologram surpasses a certain threshold $T$. Given the halfway vector ${\textbf{h}} = \langle {\langle \ell \rangle - {\textbf{u}}} \rangle$, the cosine $c$ of the angle at which the maximum specular light strength is visible from the hologram is
$$c = \left\{{\begin{array}{*{20}{l}}{1,}&{{\rm if}\;({\textbf{h}} \cdot {\textbf{n}}) \gt \cos ({\theta _{{\rm max}}})}\\{\cos (\arccos ({\textbf{h}} \cdot {\textbf{n}}) - {\theta _{{\rm max}}}),}&{{\rm otherwise}}\end{array}} \right..$$
The light vector $\ell$ is added to the bundle $L^\prime ({\textbf{x}})$ only if the following inequality is satisfied:
$${K_s}({\textbf{x}}) \cdot \parallel \ell \parallel \cdot {c^{\alpha ({\textbf{x}})}} \gt T.$$
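
Putting Eqs. (4)–(6) together, the per-point shading pass might look as follows (a sketch reusing the illustrative `Vec3` helpers; the `dot`, `normalize`, and parameter names are assumptions):

```cuda
// Sketch of the per-point BRDF pass: accumulate the diffuse term (Eq. 4)
// and keep only the specularly significant rays in the bundle (Eqs. 5-6).
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
inline float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
inline Vec3 normalize(Vec3 a) {
    float s = 1.0f / std::sqrt(dot(a, a));
    return {a.x * s, a.y * s, a.z * s};
}

struct PointBRDF { float B_d; std::vector<Vec3> bundle; };

PointBRDF shadePoint(const std::vector<Vec3>& L, // traced rays; ||l|| ~ intensity
                     Vec3 n, Vec3 u,             // surface and hologram normals
                     float Kd, float Ks, float alpha,
                     float thetaMax, float T)
{
    PointBRDF out{0.0f, {}};
    for (Vec3 l : L) {
        float len = std::sqrt(dot(l, l));
        out.B_d += std::max(0.0f, Kd * dot(l, n));                  // Eq. (4)
        // Halfway vector h = <<l> - u>.
        Vec3 h = normalize({l.x / len - u.x, l.y / len - u.y, l.z / len - u.z});
        float hn = dot(h, n);
        float c = (hn > std::cos(thetaMax))                         // Eq. (5)
                ? 1.0f
                : std::cos(std::acos(hn) - thetaMax);
        if (Ks * len * std::pow(c, alpha) > T)                      // Eq. (6)
            out.bundle.push_back(l);
    }
    return out;
}
```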

Once all points ${\textbf{x}}$ with their associated ${B_d}$ and bundles have been calculated, the complex-valued amplitudes on the hologram plane can be computed.

Rays from point ${\textbf{x}}$ are traced to hologram pixel ${\textbf{p}}$ only if the incidence angle does not surpass the maximum diffraction angle ${\theta _{{\rm max}}}$ given by the grating equation. Skipping these calculations also speeds up the algorithm. For every such pixel ${\textbf{p}}$, a visibility ray is traced to ${\textbf{x}}$ to see whether it is occluded [cf. Fig. 3(d)]. If not, the complex-valued amplitude of hologram pixel $H({\textbf{p}})$ is incremented using

$$B({\textbf{x}},{\textbf{p}}) = {B_d}({\textbf{x}}) + \sum\limits_{\ell \in L^\prime ({\textbf{x}})} {K_s}({\textbf{x}}) \cdot \parallel \ell \parallel \cdot {({\textbf{h}}({\textbf{p}}) \cdot {\textbf{n}})^{\alpha ({\textbf{x}})}}.$$

This expression is combined with Eqs. (1) and (2) to compute the resulting complex-valued amplitude.
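
In a CUDA implementation, this per-pixel evaluation naturally extends the inner loop of the earlier PSF kernel. A sketch of the device-side evaluation of Eq. (7) follows; note that reading ${\textbf{h}}({\textbf{p}})$ as the normalized sum of the light direction and the direction from ${\textbf{x}}$ to ${\textbf{p}}$ is our interpretation, and the clamp before the power is an implementation guard, not part of Eq. (7).

```cuda
// Sketch: view-dependent amplitude B(x, p) of Eq. (7), evaluated on the
// device for one (point, pixel) pair that passed the visibility test.
#include <cuda_runtime.h>

__device__ float evalB(float B_d,
                       const float3* bundle,    // pre-normalized directions <l>
                       const float* bundleLen,  // stored norms ||l||
                       int nB,
                       float3 p, float3 x, float3 n, float Ks, float alpha)
{
    // View direction from the scene point x toward the hologram pixel p.
    float3 v = make_float3(p.x - x.x, p.y - x.y, p.z - x.z);
    float vn = rsqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    v = make_float3(v.x * vn, v.y * vn, v.z * vn);

    float B = B_d;
    for (int j = 0; j < nB; ++j) {
        float3 l = bundle[j];
        // Halfway vector h(p) between the light ray and the view direction.
        float3 h = make_float3(l.x + v.x, l.y + v.y, l.z + v.z);
        float hn = rsqrtf(h.x * h.x + h.y * h.y + h.z * h.z);
        float hdn = (h.x * n.x + h.y * n.y + h.z * n.z) * hn;
        if (hdn > 0.0f)  // guard: powf of a negative base would be NaN
            B += Ks * bundleLen[j] * powf(hdn, alpha);
    }
    return B;
}
```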

A rendered image of the test scene is shown in Fig. 4. It consists of a wooden Cornell box with a square area light on the ceiling, containing two brick-textured cuboids, a sphere, and a mesh of the Stanford dragon. The ${K_d}$ term varies depending on the texture coordinates of the polygons, and the wooden textured walls also have a pixel-dependent specular exponent $\alpha$ to simulate variable surface roughness. Specular highlights, soft shadows, and global illumination are visible.

Fig. 4. Rendered views from a simulation environment. (a) Perspective view of the 3D scene. (b) Depth map showing the $z$ coordinate ranging from zero (black) up to $R \cdot p$ (white).


We generated a hologram with a wavelength of $\lambda = 633\;{\rm nm}$, a pixel pitch of $p = 2\;{\unicode{x00B5}{\rm m}}$, and a resolution of $4096 \times 4096$ pixels. The algorithm ran on a machine with an Intel Xeon E5-2687W v4 processor (3 GHz), 256 GB of RAM, and an NVIDIA TITAN RTX GPU, running Windows 10. The program was implemented in C++17, with CUDA 11.0 and OptiX 7.1. The scene was sampled with $N = 2 \cdot {10^6}$ points, the BRDF of each being calculated using 64 rays sampling the area light (subdivided into $8 \times 8$ squares) and 128 randomly sampled rays on the hemisphere for global illumination. The total calculation time was 1 h and 50 min. Figure 5 illustrates two reconstructed holographic views of the generated CGH, one numerical and one optical.

Fig. 5. Two views of the hologram: (a) a numerical reconstruction and (b) an optical reconstruction recorded on an optical setup.


For validation, additional numerical and optical experiments were performed. Visualization 1 shows a video of numerically rendered views from multiple viewpoints, obtained by scanning an aperture horizontally in the Fourier space of a $16\; {\rm K}\times 16\; {\rm K}$ hologram, followed by an angular spectrum method backpropagation [24].
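
The authors' processing tool is not described at code level; below is a minimal cuFFT-based sketch of the described pipeline (aperture windowing in Fourier space followed by angular spectrum backpropagation). The rectangular aperture shape, the FFT normalization, and all names are our illustrative choices.

```cuda
// Sketch: extract one view from a hologram by windowing its Fourier spectrum
// with an aperture and backpropagating with the angular spectrum method.
#include <cuComplex.h>
#include <cuda_runtime.h>
#include <cufft.h>

__global__ void apertureAndPropagate(cufftComplex* F, int N, float pitch,
                                     float lambda, float z,
                                     int cx, int cy, int halfWidth)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iy = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix >= N || iy >= N) return;

    // FFT-ordered integer frequencies and the corresponding physical ones.
    int sx = (ix < N / 2) ? ix : ix - N;
    int sy = (iy < N / 2) ? iy : iy - N;
    float fx = sx / (N * pitch), fy = sy / (N * pitch);

    // Rectangular aperture centered on (cx, cy), selecting one view.
    int dx = sx - cx; if (dx < 0) dx = -dx;
    int dy = sy - cy; if (dy < 0) dy = -dy;
    bool inside = (dx <= halfWidth) && (dy <= halfWidth);

    float arg = 1.0f - lambda * lambda * (fx * fx + fy * fy);
    cufftComplex v = make_cuComplex(0.0f, 0.0f);
    if (inside && arg > 0.0f) {   // keep only propagating frequencies
        // Angular spectrum transfer function for backpropagation over -z.
        float phase = -2.0f * 3.14159265f / lambda * z * sqrtf(arg);
        float s, c;
        sincosf(phase, &s, &c);
        v = cuCmulf(F[iy * N + ix], make_cuComplex(c, s));
    }
    F[iy * N + ix] = v;
}

// Host driver: forward FFT, filter and propagate, inverse FFT (unnormalized).
void extractView(cufftComplex* d_H, int N, float pitch, float lambda, float z,
                 int cx, int cy, int halfWidth)
{
    cufftHandle plan;
    cufftPlan2d(&plan, N, N, CUFFT_C2C);
    cufftExecC2C(plan, d_H, d_H, CUFFT_FORWARD);
    dim3 b(16, 16), g((N + 15) / 16, (N + 15) / 16);
    apertureAndPropagate<<<g, b>>>(d_H, N, pitch, lambda, z, cx, cy, halfWidth);
    cufftExecC2C(plan, d_H, d_H, CUFFT_INVERSE);
    cufftDestroy(plan);
}
```

Sweeping `cx` across the spectrum reproduces the horizontal aperture scan used to generate the view sequence.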

Optical reconstructions were made with a Fourier holographic display [25] employing a 4K spatial light modulator (HoloEye GAEA 2, pixel ${\rm pitch} = 3.75\; \unicode{x00B5}{\rm m}$, resolution ${3840 \times 2160\; \rm pixels}$) and a red laser source ($\lambda = 632.8\;{\rm nm}$). Visualization 2 presents a continuous change of views in the horizontal and vertical directions, captured with a zoom lens camera. For the reconstruction, the views were extracted from the generated CGH using a custom processing tool based on the angular spectrum method and the confocal Fresnel propagation method [25]. In the display setup, the hologram plane was located at 600 mm and the dragon at 788 mm from the viewing window of the display, whereas in the original scene geometry their mutual distance was only 23 mm.

We propose a new CGH algorithm that achieves photorealism without compromising depth cues or requiring view discretization, combining the accuracy of point-cloud CGH algorithms with the realistic shading of ray-tracing solutions from computer graphics. We demonstrate its feasibility by implementing it on the GPU. This algorithm can be used to render high-quality holographic movies and to create reference CGH renderings. The latter can serve as a reference for quantifying how much quality is lost in fast but approximate CGH algorithms. Furthermore, it can be useful for testing codecs with representative input data for efficiently compressing holograms. Future work will focus on accelerating the algorithm by optimizing the GPU code, supporting more advanced scene representations, and leveraging the smoothness of $B({\textbf{x}},{\textbf{p}})$. Another goal is to further expand the range of supported lighting effects, material complexity, and BRDF representations.

Funding

Fonds Wetenschappelijk Onderzoek (12ZQ220N); Narodowe Centrum Nauki Poland (UMO-2018/31/B/ST7/02980); WUT.

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are publicly available in Ref. [26].

REFERENCES

1. D. Blinder, A. Ahar, S. Bettens, T. Birnbaum, A. Symeonidou, H. Ottevaere, C. Schretter, and P. Schelkens, Signal Process. Image 70, 114 (2019). [CrossRef]  

2. J.-H. Park, J. Inf. Disp. 18, 1 (2017). [CrossRef]  

3. M. E. Lucente, J. Electron. Imaging 2, 28 (1993). [CrossRef]  

4. S.-C. Kim and E.-S. Kim, Appl. Opt. 47, D55 (2008). [CrossRef]  

5. P. Tsang, W.-K. Cheung, T.-C. Poon, and C. Zhou, Opt. Express 19, 15205 (2011). [CrossRef]  

6. H. Kim, J. Hahn, and B. Lee, Appl. Opt. 47, D117 (2008). [CrossRef]  

7. K. Matsushima and S. Nakahara, Appl. Opt. 48, H54 (2009). [CrossRef]  

8. H. Nishi and K. Matsushima, Appl. Opt. 56, F37 (2017). [CrossRef]  

9. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, Opt. Express 23, 25440 (2015). [CrossRef]  

10. A. Symeonidou, D. Blinder, and P. Schelkens, Opt. Express 26, 10282 (2018). [CrossRef]  

11. A. Gilles and P. Gioia, Appl. Opt. 57, 8508 (2018). [CrossRef]  

12. T. Yatagai, Appl. Opt. 15, 2722 (1976). [CrossRef]  

13. M. Yamaguchi, H. Hoshino, T. Honda, and N. Ohyama, Proc. SPIE 1914, 25 (1993). [CrossRef]  

14. R. H.-Y. Chen and T. D. Wilkinson, Appl. Opt. 48, 4246 (2009). [CrossRef]  

15. K. Wakunami, H. Yamashita, and M. Yamaguchi, Opt. Express 21, 21811 (2013). [CrossRef]  

16. H. Zhang, Y. Zhao, L. Cao, and G. Jin, Opt. Express 23, 3901 (2015). [CrossRef]  

17. S. Igarashi, T. Nakamura, K. Matsushima, and M. Yamaguchi, Opt. Express 26, 10773 (2018). [CrossRef]  

18. Y. Yamamoto, H. Nakayama, N. Takada, T. Nishitsuji, T. Sugie, T. Kakue, T. Shimobaba, and T. Ito, Opt. Express 26, 34259 (2018). [CrossRef]  

19. H. Kang, T. Yamaguchi, and H. Yoshikawa, Appl. Opt. 47, D44 (2008). [CrossRef]  

20. T. Shimobaba and T. Ito, Opt. Express 25, 77 (2017). [CrossRef]  

21. D. Blinder, Opt. Express 27, 23124 (2019). [CrossRef]  

22. T. Kurihara and Y. Takaki, Opt. Express 20, 3529 (2012). [CrossRef]  

23. T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, Appl. Opt. 52, A201 (2013). [CrossRef]  

24. J. W. Goodman, Introduction to Fourier Optics (W. H. Freeman and Company, 2017).

25. T. Kozacki, M. Chlipala, and P. L. Makowski, Opt. Express 26, 12144 (2018). [CrossRef]  

26. D. Blinder, M. Chlipala, T. Kozacki, and P. Schelkens, "Holographic database," Interfere (2021), www.erc-interfere.eu/downloads.html.

Supplementary Material (2)

Visualization 1: 180 numerically rendered views from multiple viewpoints, obtained by scanning an aperture horizontally in the Fourier space of a 16 K × 16 K hologram, followed by an angular spectrum method backpropagation.

Visualization 2: Optical reconstructions of the hologram, made with a Fourier holographic display employing a 4K SLM (HoloEye GAEA 2, pixel pitch = 3.75 µm, resolution 3840 × 2160 pixels) and a red laser source (632.8 nm).
